https://electronics.stackexchange.com/questions/76948/what-does-configuration-refer-to-in-pci-and-pcie-how-is-this-different-from

# What does "configuration" refer to in PCI and PCIe? How is this different from "Enumeration"?
I have not been able to find a clear description of what configuration means in PCI and PCIe. I have found something called a configuration space, but without knowing what configuration means, it is not possible to really understand what a configuration space is.
So what does configuration and a configuration space in PCI and PCIe mean? And how is this different from "Enumeration"?
Each PCI device (when I write PCI, I refer to PCI 3.0, as opposed to PCIe) has two "ranges" - a configuration range (CFG) and a "memory mapped input-output" range (MMIO). I won't dive deep into the concepts of address spaces and MMIO because it would make the answer too long and complicated. Google them if they are not familiar to you. In short: the CFG range is a standard set of registers used to configure the PCI device; the MMIO range is a custom, device-specific set of registers. In other words: CFG ranges are the same across all PCI devices (there might be slight differences, but the majority of registers are standard); MMIO ranges are device specific. (NOTE: while the terms "range" and "space" are not synonyms, there is a consensus to call the MMIO range of a device its MMIO space. I'll use them interchangeably.)
Now, the size of the CFG space is standard - there is an upper bound on the number of registers the CFG space can contain, and it is the same for each PCI device. Usually, the actual number of registers in the CFG space is much smaller than the maximum. The size of the MMIO space, on the other hand, is not constant. Why? Well, different devices need different numbers of registers for communication.
Now think about it for a moment: if the size of MMIO space is not constant, then we need to provide the information about this size of a particular device to the computer in some way, right? One option would be to manually define these parameters for each device. It is the way the early computers worked: you really had to configure each device you plug into a computer by hand. Today we are lazy and want the "plug-and-play" functionality - the computer must obtain this info by itself the moment a new device is added.
In order to allow for "plug-and-play" in PCI devices, the concept of MMIO Base Address Registers (MMIO BARs) was introduced. These registers reside in the CFG space of each device (up to six BARs per CFG space are allowed). The flow is as follows:
1. a computer knows to search for these registers during startup
2. a computer reads the BARs in order to understand what sizes of MMIO ranges this device requires
3. a computer allocates the device's MMIO spaces, which become standard MMIO ranges in the global MMIO space
4. a computer writes back to MMIO BARs the addresses assigned for each device's MMIO range in the global MMIO space.
The above 4 stages are known as "enumeration" of the device - I think it is usually the BIOS that performs the devices' enumeration during startup.
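Step 2 of this flow - discovering the size of an MMIO range - is worth a small illustration. A device hard-wires the low bits of each BAR to zero; firmware writes all 1s to the BAR, reads it back, masks off the flag bits, and takes the two's complement. The sketch below is a simplified model in Python (32-bit memory BAR case only; the mask and example value are chosen for illustration):

```python
def bar_size(readback_after_all_ones: int) -> int:
    """Size implied by a 32-bit memory BAR after writing all 1s to it."""
    masked = readback_after_all_ones & 0xFFFFFFF0   # clear the low flag bits
    return (~masked + 1) & 0xFFFFFFFF               # two's complement = size

# A device that needs a 4 KiB MMIO range reads back 0xFFFFF000:
print(bar_size(0xFFFFF000))  # 4096
```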
Besides the BARs, the CFG space of a device contains many more registers. All other writes to CFG space, which are not part of the "enumeration" flow, are called "configuration" of the device. This includes runtime configuration such as: interrupt selection, MSI vectors and addresses, the device's power states and many more.
In summary: "enumeration" is the flow executed at startup which allocates MMIO ranges to all the devices. "configuration" is all other writes to CFG space of the device (in general, "enumeration" is included in "configuration").
NOTE: this description is simplified. There are more aspects both to "enumeration" and to "configuration".
Offtopic:
It seems that you are new to PCIe and want a fast introduction. While your desire is totally understandable (it will take you way more time to understand PCIe if you don't ask questions), I believe that nothing can replace the original spec. The problem is that the PCIe spec is written in a manner that assumes you're familiar with PCI (at least it seemed that way to me) - you need to read that first. So, start by googling the PCI 3.0 specification and the PCIe 2.1 specification. These documents are frustratingly long, but they'll become your bible if you're really going to work with PCIe devices.
• The first paragraph belongs in a comment on the question. – Mels Jul 25 '13 at 12:14
• Dear Vasiliy, I have the PCIe 3.0 specs with me but they did not seem to describe what configuration itself means. They just went into it as "... these are the configuration registers" "... this is the configuration space". I was like, ok, but what is configuration. I could not find answer to this question anywhere. I also have 2 books PCI demystified and PCI system architecture by mindshare. If I do post any more questions, it will be just 1 more. But first I will try to find the answer myself. I only asked question if things were not clear in the documents. Thank you very much for your time. Jul 25 '13 at 12:19
• All I need is direction, I would not expect anyone to pour a whole white paper in response to my question. After all everyone is quite busy. Thank you. Jul 25 '13 at 12:20
• @quantum231, no offense man! I didn't mean to say that your question is unnecessary or bad. If I did think this way - no way I would spend my time on it. All I wanted to say is that you must read the specs. If you already do - way to go! I myself spent a lot of time reading them, and I know perfectly well that a bit of explanations could speed up the process dramatically. Jul 25 '13 at 12:23
• thank you so much, from this day onwards you are my sensei. Jul 26 '13 at 8:27
https://math.hecker.org/2011/06/25/multiplying-block-diagonal-matrices/

## Multiplying block diagonal matrices
In a previous post I discussed the general problem of multiplying block matrices (i.e., matrices partitioned into multiple submatrices). I then discussed block diagonal matrices (i.e., block matrices in which the off-diagonal submatrices are zero) and in a multipart series of posts showed that we can uniquely and maximally partition any square matrix into block diagonal form. (See part 1, part 2, part 3, part 4, and part 5.) With this as background I now discuss the general problem of multiplying two block diagonal matrices. In particular I want to prove the following claim:
If $A$ and $B$ are $n$ by $n$ square matrices identically partitioned into block diagonal form:
$A = \begin{bmatrix} A_{1}&0&\cdots&0 \\ 0&A_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&A_{q} \end{bmatrix} \qquad B = \begin{bmatrix} B_{1}&0&\cdots&0 \\ 0&B_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&B_{q} \end{bmatrix}$
then their product $C = AB$ is also a block diagonal matrix, identically partitioned to $A$ and $B$:
$C = \begin{bmatrix} C_{1}&0&\cdots&0 \\ 0&C_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&C_{q} \end{bmatrix}$
with $C_{\alpha} = A_{\alpha}B_{\alpha}$.
Proof: Let $A$ and $B$ be $n$ by $n$ square matrices identically partitioned into block diagonal form with $q$ row and column partitions. In our framework identically partitioned means that the $q$ partitions of $A$ and $B$ can be described by a partition vector $u$ of length $q+1$, with both $A_{\alpha}$ and $B_{\alpha}$ containing $u_{\alpha+1} - u_{\alpha}$ rows and columns.
From the previous discussion on multiplying block matrices we know that the $n$ by $n$ matrix product $C = AB$ can be described as a block matrix with $q$ row partitions and $q$ column partitions:
$C = \begin{bmatrix} C_{11}&C_{12}&\cdots&C_{1q} \\ C_{21}&C_{22}&\cdots&C_{2q} \\ \vdots&\vdots&\ddots&\vdots \\ C_{q1}&C_{q2}&\cdots&C_{qq} \end{bmatrix}$
with submatrices computed as follows:
$C_{\alpha \beta} = \sum_{\gamma = 1}^{q} A_{\alpha \gamma} B_{\gamma \beta}$
Note that since $A_{\alpha \gamma}$ contains $u_{\alpha+1} - u_\alpha$ rows and $u_{\gamma+1} - u_\gamma$ columns, and $B_{\gamma \beta}$ contains $u_{\gamma+1} - u_\gamma$ rows and $u_{\beta+1} - u_\beta$ columns, $C_{\alpha \beta}$ contains $u_{\alpha+1} - u_\alpha$ rows and $u_{\beta+1} - u_\beta$ columns.
We can rewrite the above expression for $C_{\alpha \beta}$ as follows:
$C_{\alpha \beta} = \sum_{\gamma = 1}^{\alpha-1} A_{\alpha \gamma} B_{\gamma \beta} + A_{\alpha \alpha}B_{\alpha \beta} + \sum_{\gamma = \alpha+1}^{q} A_{\alpha \gamma} B_{\gamma \beta}$
For both sums we have $\gamma \ne \alpha$ for all terms in the sums, and since $A$ is in block diagonal form we have $A_{\alpha \gamma} = 0$ for all terms in the sums, so that
$C_{\alpha \beta} = \sum_{\gamma = 1}^{\alpha-1} 0 \cdot B_{\gamma \beta} + A_{\alpha \alpha}B_{\alpha \beta} + \sum_{\gamma = \alpha+1}^{q} 0 \cdot B_{\gamma \beta} = A_{\alpha \alpha}B_{\alpha \beta}$
Since $B$ is also in block diagonal form, if $\alpha \ne \beta$ we have $B_{\alpha \beta} = 0$ and
$C_{\alpha \beta} = A_{\alpha \alpha}B_{\alpha \beta} = A_{\alpha \alpha} \cdot 0 = 0$
Since $C_{\alpha \beta} = 0$ if $\alpha \ne \beta$, $C$ is also in block diagonal form.
We then have $C_{\alpha \alpha} = A_{\alpha \alpha}B_{\alpha \alpha}$ or in our shorthand notation $C_{\alpha} = A_{\alpha}B_{\alpha}$ so that
$C = AB = \begin{bmatrix} C_{1}&0&\cdots&0 \\ 0&C_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&C_{q} \end{bmatrix} = \begin{bmatrix} A_{1}B_{1}&0&\cdots&0 \\ 0&A_{2}B_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&A_{q}B_{q} \end{bmatrix}$
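As a quick numerical sanity check of the claim (pure illustration; the helper and the matrices below are my own, not part of the proof):

```python
import numpy as np

def blkdiag(*blocks):
    """Assemble square blocks into a block diagonal matrix."""
    n = sum(b.shape[0] for b in blocks)
    out, i = np.zeros((n, n)), 0
    for b in blocks:
        k = b.shape[0]
        out[i:i+k, i:i+k] = b
        i += k
    return out

A1, A2 = np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[5.0]])
B1, B2 = np.array([[0.0, 1.0], [1.0, 0.0]]), np.array([[2.0]])
A, B = blkdiag(A1, A2), blkdiag(B1, B2)

# The product is block diagonal with C_alpha = A_alpha * B_alpha:
assert np.allclose(A @ B, blkdiag(A1 @ B1, A2 @ B2))
```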
Note that if $A$ and $B$ are in maximal block diagonal form with only one partition then $A = A_1$ and $B = B_1$ so that this reduces to $C_1 = A_1B_1 = AB = C$.
On the other hand, if $A$ and $B$ are in maximal block diagonal form with $n$ partitions, such that
$A = \begin{bmatrix} a_1&0&\cdots&0 \\ 0&a_2&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&a_n \end{bmatrix} \qquad B = \begin{bmatrix} b_1&0&\cdots&0 \\ 0&b_2&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&b_n \end{bmatrix}$
then $A_\alpha = \begin{bmatrix} a_{\alpha} \end{bmatrix}$ and $B_\alpha = \begin{bmatrix} b_{\alpha} \end{bmatrix}$ so that
$C = AB = \begin{bmatrix} C_{1}&0&\cdots&0 \\ 0&C_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&C_{n} \end{bmatrix} = \begin{bmatrix} a_1b_1&0&\cdots&0 \\ 0&a_2b_2&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&a_nb_n \end{bmatrix}$
In my next post I discuss inverting block diagonal matrices.
This entry was posted in linear algebra.
### One Response to Multiplying block diagonal matrices
1. Varun Reddy says:
Elegant proof!
http://mathhelpforum.com/trigonometry/6977-periodic-function.html

# Math Help - Periodic Function
1. ## Periodic Function
Hey can anyone help me with this Q
Mr Johnston is a very keen fisherman who will go fishing at any time of the day or night. He requires 1.5 metres of water at the boat ramp (ie. 1.5 m above the mean sea level) to launch his boat the "Periodic Function". The next high tide occurs at 2am, at a height of 2.8m above the mean sea level
a) Find the times in the first 24 hours after 2am when he is able to launch his boat
thanx heaps
2. Originally Posted by needmathshelp
Hey can anyone help me with this Q
Mr Johnston is a very keen fisherman who will go fishing at any time of the day or night. He requires 1.5 metres of water at the boat ramp (ie. 1.5 m above the mean sea level) to launch his boat the "Periodic Function". The next high tide occurs at 2am, at a height of 2.8m above the mean sea level
a) Find the times in the first 24 hours after 2am when he is able to launch his boat
thanx heaps
Hi,
I assume that the height of the water level can be described approximately by a cosine function. (In real life this assumption is not true.)
In 24 hours you have 2 high tides and 2 low tides. If one high tide is at 2 am, then there is another at 2 pm and the next at 2 am, and so on...
The maximum height is 2.8 m at 2 am. Put all these parts together and you'll get the depth of water as a function of time t:
$d(t)=2.8 \cdot \cos\left(\frac{\pi}{6} \left(t-2 \right) \right)$
Now you know that d(t) >= 1.5. That means you have to solve this inequality for t:
$1.5 \leq 2.8 \cdot \cos\left(\frac{\pi}{6} \left(t-2 \right) \right)$
I got: 2 am < t < 3.92 am or 0.08 pm < t < 3.92 pm or 0.08 am < t < 2 am
EB
I've attached an image to show you the graph of the function.
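A quick numerical check of these launch windows (my own sketch; t is hours after midnight):

```python
import math

def d(t):
    """Water level (m above mean sea level) t hours after midnight."""
    return 2.8 * math.cos(math.pi / 6 * (t - 2))

# Half-width of each window where d(t) >= 1.5:
t_star = 6 / math.pi * math.acos(1.5 / 2.8)
print(round(2 - t_star, 2), round(2 + t_star, 2))    # 0.08 3.92  (am window)
print(round(14 - t_star, 2), round(14 + t_star, 2))  # 12.08 15.92 (pm window)
```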
https://docs.nebula-graph.io/3.4.1/3.ngql-guide/3.data-types/3.string/

# String
Fixed-length strings and variable-length strings are supported.
## Declaration and literal representation
The string type is declared with the keywords of:
• STRING: Variable-length strings.
• FIXED_STRING(<length>): Fixed-length strings. <length> is the length of the string, such as FIXED_STRING(32).
A string type is used to store a sequence of characters (text). The literal constant is a sequence of characters of any length surrounded by double or single quotes. For example, "Hello, Cooper" or 'Hello, Cooper'.
Nebula Graph supports using string types in the following ways:
• Define the data type of VID as a fixed-length string.
• Use variable-length strings as Schema names, including the names of graph spaces, tags, edge types, and properties.
• Define the data type of the property as a fixed-length or variable-length string.
For example:
• Define the data type of the property as a fixed-length string
nebula> CREATE TAG IF NOT EXISTS t1 (p1 FIXED_STRING(10));
• Define the data type of the property as a variable-length string
nebula> CREATE TAG IF NOT EXISTS t2 (p2 STRING);
When the fixed-length string you try to write exceeds the length limit:
• If the fixed-length string is a property, the writing will succeed, and NebulaGraph will truncate the string and only store the part that meets the length limit.
• If the fixed-length string is a VID, the writing will fail and NebulaGraph will return an error.
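The truncation rule for properties can be mimicked like this (illustration only, in plain Python rather than nGQL; `store_fixed_string` is a made-up helper, not a NebulaGraph API):

```python
def store_fixed_string(value: str, length: int) -> str:
    """Mimic a FIXED_STRING(length) property write: truncate, don't fail."""
    return value[:length]

print(store_fixed_string("Hello, Cooper", 10))  # Hello, Coo
```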
## Escape characters
Line breaks are not allowed in a string. Escape characters are supported within strings, for example:
• "\n\t\r\b\f"
• "\110ello world"
## OpenCypher compatibility
There are some small differences among openCypher, Cypher, and nGQL. The following is what openCypher requires: single quotes cannot be converted to double quotes.
# File: Literals.feature
Feature: Literals
Background:
Given any graph
Scenario: Return a single-quoted string
When executing query:
"""
RETURN '' AS literal
"""
Then the result should be, in any order:
| literal |
| '' | # Note: it should return single-quotes as openCypher required.
And no side effects
Cypher, on the other hand, accepts both single quotes and double quotes in the returned results. nGQL follows the Cypher way.
nebula> YIELD '' AS quote1, "" AS quote2, "'" AS quote3, '"' AS quote4
+--------+--------+--------+--------+
| quote1 | quote2 | quote3 | quote4 |
+--------+--------+--------+--------+
| "" | "" | "'" | """ |
+--------+--------+--------+--------+
Last update: March 22, 2023
https://paperswithcode.com/method/generalized-focal-loss
# Generalized Focal Loss
Introduced by Li et al. in Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection
Generalized Focal Loss (GFL) is a loss function for object detection that combines Quality Focal Loss and Distribution Focal Loss into a general form.
https://aviation.stackexchange.com/questions/67995/if-the-efficiency-of-a-turbofan-engine-is-35-where-does-the-rest-of-the-fuel-e

# If the efficiency of a turbofan engine is 35%, where does the rest of the fuel energy go?
There have been a couple of questions on this site about the efficiency and propulsive power of turbine engines (here, here, here).
In a high bypass turbofan engine, what are the losses in the different stages? Where does the unutilised energy go, and what exactly is left over to make the aeroplane fly?
There are a couple of sources of loss throughout the process, as indicated in the figure from an old printed university textbook. I've had to translate the labelling; open to suggestions there. The percentages are valid for a high bypass turbofan manufactured in the late 80s.
1. The total energy input starts with the fuel flow: chemical energy per second.
2. Combustion converts the chemical energy into a heat flow pretty successfully, with about 1% lost in incomplete combustion. This takes place in the combustion chamber.
3. The turbine extracts mechanical energy from the heat flow, and uses a portion of it to power the compressor. The thermodynamic efficiency of the Brayton cycle determines the resulting power fraction. I've labelled the resulting net mechanical power as Gas Power, which sounds a bit dicey. This Gas Power can be converted into shaft power and/or into jet power, depending on the type of turbine engine.
Note that the thermodynamic efficiency depends on the inflow speed of the air into the combustion chamber: it is decelerated and compressed in the intake, allowing for higher pressure ratios which result in higher efficiencies.
4. We now need to utilise the Gas Power to raise the kinetic energy of the medium that is used for propulsion. This increase in kinetic energy (from the reference frame of the aeroplane) is labeled Propulsive Power.
Aero engines can be divided into two major groups:
• a. Transformation of available Gas Power into shaft power, delivering mechanical energy which can drive a propeller or rotor, which then increases the kinetic energy of a mass flow of surrounding air - a turboshaft.
• b. Direct transformation of available Gas Power into kinetic energy by expansion in a nozzle - a turbojet.
The principal difference between the two is that in a. the mass flow delivering the propulsion is larger than the mass flow through the turbine, while in b. the mass flow through the turbine is equal to the propulsion mass flow. Since thrust T = $$\dot{m} \cdot \Delta V$$, this means that at a given thrust and entry velocity, the exhaust velocity of a. must be lower than that of b. Note that both the turboprop and the turbofan are a mix of a. and b. since a portion of the turbine mass flow is expanded for jet propulsion.
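The trade-off can be put in numbers (illustrative values only, not taken from the figure): for a fixed thrust and airspeed, a larger propulsion mass flow implies a smaller exit velocity.

```python
def exhaust_velocity(thrust, mdot, v0):
    """Exit velocity implied by T = mdot * (v_e - v_0)."""
    return v0 + thrust / mdot

print(exhaust_velocity(50e3, 400.0, 250.0))  # 375.0 m/s, turbofan-like
print(exhaust_velocity(50e3, 100.0, 250.0))  # 750.0 m/s, turbojet-like
```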
Converting Gas Power into Propulsion Power cannot take place isentropically (loss free): in a turbofan about 5% is lost into heat flow, less in a turboprop.
5. The propulsive power is the power transferred to the medium used for propulsion (air or combustion gas). The Propelling Power is the power transferred to the aeroplane. There are efficiency losses in this process as well: the medium exits the propulsion assembly at a higher velocity than the airspeed of the aeroplane, and therefore has a certain absolute velocity (relative to earth). The corresponding kinetic energy flow must be considered as a loss. Of course, the outflow velocity must be greater than the airspeed in order to generate thrust.
This power transformation is therefore also associated with an efficiency factor, $$\eta$$ propulsion. It costs power to drive those fan blades, which have induced drag and profile drag just like a wing does.
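The loss described here is captured by the classic Froude propulsive efficiency: thrust power divided by the rate at which kinetic energy is added to the jet. A minimal sketch (the velocities below are illustrative, not taken from the figure):

```python
def froude_efficiency(v0: float, ve: float) -> float:
    """Propulsive efficiency 2*v0/(v0 + ve) for a uniform jet."""
    return 2 * v0 / (v0 + ve)

print(froude_efficiency(250.0, 350.0))  # ~0.83: small delta-V, turbofan-like
print(froude_efficiency(250.0, 600.0))  # ~0.59: large delta-V, turbojet-like
```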
When using these definitions, we can see that an aeroplane on the runway, just before take-off, with the brakes applied and the throttle fully open, has:
• Maximum propulsive power since the airflow has maximum $$\Delta V$$. As computed in this answer.
• Zero propelling power, since none of the propulsive power has been transferred to the aeroplane yet. All propulsive power is transformed into kinetic energy of the gas flow.
• So if an engine has a static thrust rating of xxx lbs, is that really referring to "propulsive power" and not "thrust" per se? If an engine is running and producing 2000 lbs of static thrust, with the engine stationary, tied to a very strong fish scale, which will show 2000 lbs, isn't that thrust power? I think most of us conflate the two terms and I'm having trouble separating them conceptually. – John K Aug 23 '19 at 15:17
• @JohnK Yes it might be a confusing term, like said I am open to suggestions. Static thrust is a force - multiplied by the aeroplane velocity it becomes the power that is now labeled Thrust Power. Propulsive power would be defined by the mass flow of air through the propeller = $\frac{1}{2} \dot{m} V^2$ – Koyovis Aug 23 '19 at 22:07
• @JohnK Have changed Thrust Power into Propelling Power, a fan is a kind of propeller after all. – Koyovis Aug 25 '19 at 5:06
https://ask.libreoffice.org/en/question/85512/basic-how-to-set-the-text-displayed-in-a-combo-box/

# Basic: How to set the text displayed in a combo box?
How can I set the text showing in a form's combo box (like when record motion occurs)?
(I'm not seeking to set to the underlying list of choices in the combo-box, rather the text showing before clicking, or between text box usage.)
Here's what I've learned so far:
oEvent.Source.Text = "some text"
Will set the current combo box text from an event from this, and only this, Combo Box.
But I can't seem to find a similar method for any other combo box, e.g. one named "Foo". In other words, and in particular,
oEvent.Source.Model.Parent.getByName("Foo").Text
is missing Text (i.e. it does not seem to have a setText method).
Also I can't seem to locate a setText method when starting from thisComponent as follows:
thisComponent.getDrawPage().Forms.getByName("MyForm").getByName("Foo")
I want to be able to show the current record's values in several different combo boxes used to lookup the current record in this form where you can use the combo boxes to select records.
thisComponent.getCurrentController().getControl(oEvent.Source.getByName("target control name")).Text = "some text"
@EasyTrieve .Text applied to a Combo Box is a property, not a method, and it is read/write. If a Combo Box is the source of an event then -
Text$ = oEvent.Source.Text ' Text$ is the text of the selected list item
oEvent.Source.Text = "some text" ' set the text to display in the Combo Box
My interpretation of your post is that you want to select a record using any of the Combo Boxes and have the Form display that record (which your uploaded database does), and also display in the other Combo Boxes the text for the selected record. This can be done and I have uploaded a modified version of your database. It is just an indication of how it can be done and only changes the Phone Combo Box. The information to set the Combo Box is available in the Fields below the Combo Boxes, and my change just copies the text from the Phone text field to the Phone Combo Box.
There is however a simpler method to achieve the record display from a Combo Box selection, by using a Form Filter. I have added Form2 to your database which uses this method, and it updates all the Combo Boxes. It should be quicker, especially on a large Table. Your method requires two searches of the Table using Queries. There is also a potential problem with your method, as only the Customer_ID field is guaranteed to be unique, since it is the Primary Key. You could have duplicates in the other Fields. If there is more than one match for the selection to find the ID, your method will only show the first match. With my filter method, if there is more than one match, the record selector will show that there is more than one match.
1484562805458959MOD.odb
@peterwt, Thanks, I still have much more investigating to do, but first off I'm getting an error message from oPhone.Text in your .odb as follows (what version of LO are you running?):
LibreOffice 5.2.3.3
(-) BASIC runtime error. Property or method not found: Text.
(2017-01-18 23:21:53 +0200)
## Stats
Asked: 2017-01-16 11:37:27 +0200
Seen: 556 times
Last updated: Jan 18 '17
https://math.stackexchange.com/questions/2316393/about-radial-solutions-of-laplaces-equation

# About radial solutions of Laplace's equation

Well, I have the problem: $\Delta u(x,y)=0 \text{ on } \Omega_{a}=\{(x,y)\in\mathbb{R}^{2}:a<\lVert (x,y)\rVert<1\}$
$u(x,y)=1 \text{ on } \lVert (x,y)\rVert=a$
$u(x,y)=0 \text{ on } \lVert (x,y)\rVert=1$
And the question, is there any solution of this that is not radial?
Well, my approach is that since the domain is symmetric and the Laplace operator is invariant under rotations, we should be able to obtain something, but I don't see this very clearly. And if this works, why does the corresponding problem on $B(0,1)$ admit no nonzero radial solutions?
• Do you know how to prove uniqueness of solutions to Laplace's equation? – Chappers Jun 9 '17 at 19:09
• I think so, studying the uniqueness of the problem with homogeneous boundary conditions, and with the maximum and minimum principle we can conclude that $u$ is 0, not? – Skullgreymon Jun 9 '17 at 19:24
• Yes, that's the idea. So find a radial solution and show that it's unique. – Chappers Jun 9 '17 at 19:47
• Thank you very much, I think I understand the solution, we can think on radial solutions because the 0 is not include on our domain and adjusting our parametres of the fundamental solution we can obtain the unique solution. – Skullgreymon Jun 9 '17 at 19:56
• But one more question, this is possible because the domain is "radial" not? If the domain is a rectangle we can't find radial solutions not? – Skullgreymon Jun 9 '17 at 19:57
$u(r,\theta)=\frac{\log{r}}{\log{a}}$ is a radial solution. We now show that a solution to this problem is unique: let $u,v$ be two solutions. Then $w=u-v=0$ on both parts of the boundary and is harmonic. Then if $w \neq 0$ (so $u \neq v$) $$0 < \int_{\Omega_a} \lvert \nabla w \rvert^2 \, dV = \int_{\partial \Omega_a} w \nabla w \cdot d\mathbf{s} - \int_{\Omega_a} w\Delta w \, dV = 0,$$ since $w=0$ on the boundary and $\Delta w=0$ on $\Omega_a$, which is a contradiction unless $w=0$ everywhere, so $u=v$. Hence $u$ is the only solution.
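A quick numerical sanity check of this radial solution (plain Python; the concrete inner radius $a = 0.5$ is chosen just for illustration):

```python
import math

a = 0.5                                   # any inner radius 0 < a < 1
u = lambda r: math.log(r) / math.log(a)   # the radial solution log(r)/log(a)

# boundary conditions: u = 1 on r = a, u = 0 on r = 1
assert abs(u(a) - 1.0) < 1e-12
assert abs(u(1.0)) < 1e-12

# the radial Laplacian u'' + u'/r vanishes on a < r < 1 (central differences)
h = 1e-4
for r in (0.6, 0.75, 0.9):
    d1 = (u(r + h) - u(r - h)) / (2 * h)
    d2 = (u(r + h) - 2 * u(r) + u(r - h)) / h ** 2
    assert abs(d2 + d1 / r) < 1e-5
```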
There is a radial solution to Laplace's equation on $B(0,1)$ with the boundary condition $u(1,\theta)=0$, but it's exactly zero, by exactly the same argument (and indeed, this is the limit as $a\to 0$ of the $\Omega_a$ solution). | 2019-10-23 06:42:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062470197677612, "perplexity": 201.94214828532992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00402.warc.gz"} |
https://web2.0calc.com/questions/help-me-please_45 |
Let $\triangle ABC$ be an isosceles triangle such that $BC = 30$ and $AB = AC.$ We have that $I$ is the incenter of $\triangle ABC,$ and $IC = 18.$ What is the length of the inradius of the triangle?
michaelcai Jul 27, 2017
#1
Let triangle ABC be an isosceles triangle such that BC = 30 and AB = AC.
We have that I is the incenter of triangle ABC, and IC = 18.
What is the length of the inradius of the triangle?
$$\begin{array}{|rcll|} \hline \left( \frac{30}{2} \right)^2 + r^2 &=& 18^2 \\ 15^2 + r^2 &=& 18^2 \\ r^2 &=& 18^2-15^2 \\ r^2 &=& 324-225 \\ r^2 &=& 99 \\ r^2 &=& 9\cdot 11 \\ r &=& 3\cdot \sqrt{11} \\ \hline \end{array}$$
r = 9.95
heureka Jul 28, 2017
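The arithmetic above can be double-checked in a couple of lines (plain Python; the key fact used is that the inradius meets BC perpendicularly at its midpoint, forming a right triangle with IC as hypotenuse):

```python
import math

half_base = 30 / 2            # the incenter lies directly above the midpoint of BC
IC = 18
r = math.sqrt(IC ** 2 - half_base ** 2)   # Pythagoras: r^2 + 15^2 = 18^2

assert math.isclose(r, 3 * math.sqrt(11))
print(round(r, 2))  # 9.95
```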
| 2018-03-24 19:45:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7373273372650146, "perplexity": 766.7015471684956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650993.91/warc/CC-MAIN-20180324190917-20180324210917-00052.warc.gz"}
http://mathhelpforum.com/calculus/48858-general-solution-equation.html | # Math Help - General Solution to an Equation
1. ## General Solution to an Equation
Hi,
I was wondering if someone could take five minutes to explain how you work out the general solution to an equation. I sort of understand it, but is there a way I can look at an equation and instantly recognise that a particular equation has a certain solution? For instance:
dy/dx = 3(x -1)^2 + 20(y-(x-1)^3)
has the general solution:
y(x) = (x-1)^3 + Ae^20x
How would i go about finding that?
Does it depend on the initial conditions and the boundary conditions?
Cheers for the help.
2. "Five minutes"?
That is an extremely big question.
Not all differential equations have solutions that can be determined analytically. In fact the vast majority can't be.
The ones you're likely to get in an introductory course will be carefully taylored (deliberate pun) to fit the techniques which you will be guided through one by one. Some d.e's are solvable using easy techniques, some by more difficult ones.
This one:
$\frac {dy}{dx} = 3(x-1)^2 + 20 (y-(x-1)^3)$
looks at first glance to lend itself to an integrating factor approach, but this is an area in which I'm rusty and would need to check. You may be able to separate the variables.
Your five minutes are up, I'm afraid.
3. $\frac{dy}{dx}-20y=f(x)$
$\text{i.f.}=e^{\int -20 dx}=e^{-20x}$
$\int \left(e^{-20x}y\right)'\,dx=\int e^{-20x}f(x)\,dx$
$y=e^{20x}\left[\int e^{-20x}\left(3(x-1)^2-20(x-1)^3\right)dx\right]$
$y=e^{20x}\left[e^{-20x}(x-1)^3+c\right]$
$y=(x-1)^3+e^{20x}c$
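You can also confirm that this general solution satisfies the original equation without redoing the integration, by substituting it back in (plain Python, checking the derivative by central differences at a few sample points; the value A = 2 for the constant is an arbitrary choice):

```python
import math

A = 2.0                                        # arbitrary integration constant
y = lambda x: (x - 1) ** 3 + A * math.exp(20 * x)

h = 1e-6
for x in (-1.0, 0.0, 0.5):
    dy = (y(x + h) - y(x - h)) / (2 * h)       # numerical dy/dx
    rhs = 3 * (x - 1) ** 2 + 20 * (y(x) - (x - 1) ** 3)
    assert math.isclose(dy, rhs, rel_tol=1e-6)
```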
Guess that might be a bit cryptic. Lots of books skip steps. They're meant to encourage you to fill in the blanks and that helps you learn it. | 2015-06-02 03:57:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5857600569725037, "perplexity": 628.3865998441707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195035316.21/warc/CC-MAIN-20150601214355-00004-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://www.cje.net.cn/CN/abstract/abstract22043.shtml | • Research Report •
### Response of carbon metabolism by soil microbes to different fertilization regimes in a poplar plantation in coastal area of northern Jiangsu, China.
1. (Joint Center for Southern Forestry Studies, College of Biology and Environment, Nanjing Forestry University, Nanjing 210037, China)
• Online: 2015-07-10 Published: 2015-07-10
### Response of carbon metabolism by soil microbes to different fertilization regimes in a poplar plantation in coastal area of northern Jiangsu, China.
XU Wen-huan, ZHANG Ya-kun, WANG Guo-bing, RUAN Hong-hua**
1. (Joint Center for Southern Forestry Studies, College of Biology and Environment, Nanjing Forestry University, Nanjing 210037, China)
• Online:2015-07-10 Published:2015-07-10
Abstract:
Applying NPK fertilizer is an effective way to improve soil fertility. Meanwhile, biochar as a good soil amendment has been shown to help soil retain fertility when it is used with organic fertilizer, and has received attention as a means to mitigate climate change. However, the understanding of the impact and mechanism of those different fertilizers on the carbon metabolism and ecological security of soil microbes is very limited. With a poplar plantation in the coastal area of northern Jiangsu as the experimental site, we designed four fertilization treatments, CK (control group), T1 (NPK fertilizers), T2 (biochar + NPK fertilizers) and T3 (high-level biochar), and analyzed the differences in utilizing different carbon sources under the different fertilization treatments, to learn whether biochar would cause changes in soil microbial metabolic activity and carbon source metabolism. Our results showed that soil microbial metabolic activity was in the order of T3>T2>T1>CK, and T3 was significantly higher than CK, indicating high-level biochar might improve soil microbial metabolic activity significantly. The diversity indexes of microbial carbon source utilization were in the order of T3>T2>T1 and CK. Only for the McIntosh index was T3 significantly higher than CK. In the utilization of six groups of polymers, T3 was higher than CK, which indicated that biochar improved the population of microbes in favor of polymer utilization, and it also has the potential to change soil microbial functional diversity. Principal component analysis showed that increasing the number of components in the PCA presented the information more effectively. Meanwhile, the differences in carbon utilization among the different fertilization regimes were not obvious, which indicated that the functioning of the soil microbial community was stable and hard to change by short-term fertilization.
Although biochar can improve soil microbial metabolic activity, it is unable to change the functional diversity of the microbes. | 2022-08-15 00:37:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2763964533805847, "perplexity": 10523.184609972834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00217.warc.gz"} |
https://www.physicsforums.com/threads/does-vmax-wa-shm.556892/ | # Does vmax = -wA (SHM)
1. Dec 4, 2011
### Eats Dirt
1. The problem statement, all variables and given/known data
A large mass bobs up and down on a heavy spring. Initially it is at the top. It achieves its maximum downward velocity of 94 cm s⁻¹ in 0.25 s from its release.
What are the period, angular frequency, and amplitude for this motion? Find the amplitude.
2. Relevant equations
vmax=-wA
3. The attempt at a solution
I couldn't figure out the solution, so I clicked help and this is what it said. I've never seen this equation used before and was wondering if this is always the case, because to me it doesn't make sense.
The graph starts at its maximum at t = 0, so I take it as a cosine function and use the equation
x=Acos(wt)
Differentiating, I get
v=-wAsin(wt)
If t = 0, then sin(wt) = 0 and all I get is zero, so A = 0. Can someone help me understand?
Edit: Oops, I didn't input the correct time for when v = max; when I do, sin = 1. I got it now!
Clarification on the formula would still be appreciated.
Last edited: Dec 4, 2011
2. Dec 4, 2011
### technician
You are taking t=0 to be when the object is at max displacement ie x = A .
This corresponds to cos ωt = 1, which means ωt = 0.
This means that ωt = 90° (i.e. π/2) when the displacement = 0; this is the point of max velocity.
v = ωA is the max velocity
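Putting numbers to this (plain Python): release from the top to the first maximum of speed is a quarter period, so T = 4 × 0.25 s; then v_max = ωA gives the amplitude.

```python
import math

v_max = 0.94            # m/s (94 cm/s)
t_quarter = 0.25        # s, time from x = A (top) to x = 0 (max speed)

T = 4 * t_quarter               # period: 1.0 s
omega = 2 * math.pi / T         # angular frequency: ~6.28 rad/s
A = v_max / omega               # amplitude from v_max = omega * A

print(T, round(omega, 2), round(A * 100, 1))  # 1.0 6.28 15.0  (A ~ 15 cm)
```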
3. Dec 4, 2011
### grzz
yes, in SHM $v_{max} = A\omega$
http://starkville.collegetownnetwork.com/bell-media-brzkbqg/6bb7a1z.php?aba540=as2o5-oxidation-number | # What is the oxidation number of As in As2O5?
Assign oxidation numbers to each atom in the following compounds: HI, PBr3, GeS2, KH, As2O5, H3PO4.
Rules: H is +1 (except in metal hydrides, where it is -1); O is -2 except in peroxides; the oxidation numbers in a neutral compound sum to zero (and, for an ion, to the ion's charge, as in NO3-, PO4^3- and Cr2O7^2-).
For As2O5, let x be the oxidation state of As. Each O is -2, so 2x + 5(-2) = 0, which gives x = +5. Hence the oxidation numbers in As2O5 are +5 for each As and -2 for each O. By the same rules: HI is H +1, I -1; PBr3 is P +3, Br -1; GeS2 is Ge +4, S -2; KH is K +1, H -1; H3PO4 is H +1, P +5, O -2.
The amount of arsenic in a solution can be found by determining the amount oxidized by iodine, via the (unbalanced) reaction As2O3 + I2 + H2O ---> As2O5 + H+ + I-. The half-reactions for the oxidation of arsenic to the +5 state by nitrate are:
O: 2 As(0) + 5 H2O → As2O5 + 10 e- + 10 H+ | ×3
R: NO3- + 3 e- + 4 H+ → NO + 2 H2O | ×10
Since the number of electrons released in the oxidation must equal the number of electrons accepted in the reduction, the two half-equations are multiplied by the factors that give the least common multiple of the electrons transferred.
Arsenic pentoxide is the inorganic compound with the formula As2O5. This glassy, white, deliquescent solid is relatively unstable, consistent with the rarity of the As(V) oxidation state; oxidation of arsenic in air stops at As2O3, and As2O5 cannot be obtained by further oxidation in air. More common, and far more important commercially, is arsenic(III) oxide (As2O3). | 2021-05-08 02:27:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.727523922920227, "perplexity": 7852.15202469493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988831.77/warc/CC-MAIN-20210508001259-20210508031259-00500.warc.gz"}
http://www2.geo.uni-bonn.de/~wagner/pygimli/html/about.html | ## Introduction¶
pyGIMLi is an open-source library for modelling and inversion in geophysics. The object-oriented library provides management for structured and unstructured meshes in 2D and 3D, finite-element and finite-volume solvers, various geophysical forward operators, as well as Gauss-Newton based frameworks for constrained, joint and fully-coupled inversions with flexible regularization.
What is pyGIMLi suited for?
• analyze, visualize and invert geophysical data in a reproducible manner
• forward modelling of (geo)physical problems on complex 2D and 3D geometries
• inversion with flexible controls on a-priori information and regularization
• combination of different methods in constrained, joint and fully-coupled inversions
• teaching applied geophysics (e.g. in combination with Jupyter notebooks)
What is pyGIMLi NOT suited for?
• for people that expect a ready-made GUI for interpreting their data
## Authors¶
We gratefully acknowledge all contributors to the pyGIMLi open-source project and look forward to your contribution!
## Inversion¶
One main task of pyGIMli is to carry out inversion, i.e. error-weighted minimization, for given forward routines and data. Various types of regularization on meshes (1D, 2D, 3D) with regular or irregular arrangement are available. There is flexible control of all inversion parameters. The default inversion framework is based on the generalized Gauss-Newton method.
Please see Inversion for examples and more details.
## Modelling¶
pyGIMLi comes with various geophysical forward operators, which can directly be used for a given problem. In addition, abstract finite-element and finite-volume interfaces are available to solve custom PDEs on a given mesh. See pygimli.physics for a collection of forward operators and pygimli.solver for the solver interface.
The modelling capabilities of pyGIMLi include:
• 1D, 2D, 3D discretizations
• linear and quadratic shape functions (automatic shape function generator for possible higher order)
• Triangle, Quads, Tetrahedron, Prism and Hexahedron, mixed meshes
• solver for elliptic problems (Helmholtz-type PDE)
Please see Modelling for examples and more details. | 2020-11-30 14:49:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19731071591377258, "perplexity": 5261.8088383396735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141216175.53/warc/CC-MAIN-20201130130840-20201130160840-00372.warc.gz"} |
https://www.physicsforums.com/threads/activation-energy-bond-enthelpy.673513/ | # Activation energy/bond enthalpy
GeneralOJB
I'm doing A level Chemistry, and my teacher isn't good at explaining things. I know activation energy is the minimum energy required for a reaction to take place - so is that just the energy required to break the original bonds? And is the activation energy for a particular bond to form the same as the bond enthalpy (to break the bond)?
Last edited: | 2023-03-20 11:59:58 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8947766423225403, "perplexity": 544.8196411579786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00661.warc.gz"} |
https://study.com/academy/answer/identify-the-intervals-of-concavity-and-the-point-inflection-for-f-x-sqrt-7-x-5.html | # Identify the intervals of concavity and the point inflection for f(x)= \sqrt[7] {x^5}
## Question:
Identify the intervals of concavity and the point inflection for {eq}f(x)= \sqrt[7] {x^5} {/eq}
## Inflection point:
In a continuous function, the point that separates a concave-up interval from a concave-down interval is called an inflection point. Importantly, this point must belong to the domain of the function.
Domain: {eq}\displaystyle f(x)= \sqrt[7] {x^5}\\ \boxed{\displaystyle D = \{ x \in \mathbb{R} \}} {/eq}
To determine the function's concavity and inflection points, we find and analyze the second derivative:
{eq}\displaystyle f(x)= \sqrt[7] {x^5}\\ \displaystyle f'(x)= \frac{5x^4}{7\left(x^5\right)^{\frac{6}{7}}}\\ \displaystyle f''(x) = \frac{5}{7}\cdot \frac{4x^3\left(x^5\right)^{\frac{6}{7}}-\frac{30x^4}{7\left(x^5\right)^{\frac{1}{7}}}x^4}{\left(\left(x^5\right)^{\frac{6}{7}}\right)^2}\\ \displaystyle f''(x) = -\frac{10x^8}{49\left(x^5\right)^{\frac{13}{7}}}\\ \displaystyle \boxed{x=0} \,\, \Longrightarrow \,\, \textrm {critical point of the second derivative} \,\,\, x=0 \, \in \, D {/eq}
To define the sign of the second derivative in each interval, evaluate a point of each interval and verify the sign:
{eq}\left (-\infty,0\right) \,\,\,\, \rightarrow \,\,\,\, f''(-1)=\frac{10}{49} > 0 \,\,\,\,\, \textrm { the second derivative is positive in this interval } \\ \left (0, \infty\right) \,\,\,\, \rightarrow \,\,\,\, f''(1)=-\frac{10}{49}< 0 \,\,\,\,\, \textrm { the second derivative is negative in this interval } {/eq}
Conclusion (Concavity) {eq}\boxed{ \left (-\infty, 0\right)} \,\, \Longrightarrow \,\, \textrm {concave up}\\ \boxed{ \left (0, \infty\right)} \,\, \Longrightarrow \,\, \textrm {concave down}\\ \boxed{ P_1\left (0,0\right)} \,\, \Longrightarrow \,\, \textrm {inflection point of the function}\\ {/eq}
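As a quick numerical cross-check of the sign analysis above (not part of the original answer), the second derivative of f(x) = (x^5)^(1/7) can be approximated by central finite differences; the sign should come out positive at x = -1 and negative at x = 1:

```python
import math

def f(x):
    # real seventh root of x^5: negative for x < 0, positive for x > 0
    return math.copysign(abs(x) ** (5.0 / 7.0), x)

def f2(x, h=1e-4):
    # central finite-difference approximation of the second derivative
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

assert f2(-1.0) > 0.0                        # concave up on (-inf, 0)
assert f2(1.0) < 0.0                         # concave down on (0, inf)
assert abs(f2(1.0) + 10.0 / 49.0) < 1e-3     # matches f''(1) = -10/49
```

The numeric values agree with the closed form f''(x) = -(10/49) x^(-9/7) derived in the answer.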
http://www.acmerblog.com/POJ-2959-Ball-bearings-blog-906.html | 2013
11-12
# Ball bearings
The Swedish company SKF makes ball bearings. As explained by Britannica Online, a ball bearing is
“one of the two types of rolling, or antifriction, bearings (the other is the roller bearing). Its function is to connect two machine members that move relative to one another so that the frictional resistance to motion is minimal. In many applications, one of the members is a rotating shaft and the other a fixed housing. Each ball bearing has three main parts: two grooved, ringlike races and a number of balls. The balls fill the space between the two races and roll with negligible friction in the grooves. The balls may be loosely restrained and separated by means of a retainer or cage.”
Presumably, the more balls you have inside the outer ring, the smoother the ride will be, but how many can you fit within the outer ring? You will be given the inner diameter of the outer ring, the diameter of the balls, and the minimum distance between neighboring balls. Your task is to compute the maximum number of balls that will fit on the inside of the outer ring (all balls must touch the outer ring).
The first line of input contains a positive integer n that indicates the number of test cases. Then follow n lines, each describing a test case. Each test case consists of three positive floating point numbers, D, d, s, where D is the inner diameter of the outer ring, d is the diameter of a ball, and s is the minimum distance between balls. All parameters are in the range [0.0001, 500.0].
For each test case output a single integer m on a line by itself, where m is the maximum number of balls that can fit in the ball bearing, given the above constraints. There will always be room for at least three balls.
Sample input:

2
20 1 0.1
100.0 13.0 0.2

Sample output:

54
20
//* @author: 82638882@163.com
import java.util.*;

public class Main {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int a = in.nextInt();
        while ((a--) != 0) {
            double D = in.nextDouble();
            double d = in.nextDouble();
            double s = in.nextDouble();
            // ball centers lie on a circle of radius (D-d)/2; the angle between
            // adjacent centers must be at least 2*asin((d+s)/(D-d))
            System.out.printf("%d\n", (int) (Math.PI / Math.asin((d + s) / (D - d))));
        }
    }
}
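The one-liner works because the ball centers sit on a circle of radius (D-d)/2, and two adjacent centers must be at least d+s apart, which forces an angular spacing of at least 2*asin((d+s)/(D-d)) between them. A short Python sketch of the same computation (the function name is mine, not from the post), checked against the sample cases above:

```python
import math

def max_balls(D, d, s):
    # chord between adjacent centers: 2 * r * sin(theta/2) >= d + s,
    # with r = (D - d) / 2, so theta >= 2 * asin((d + s) / (D - d))
    theta = 2.0 * math.asin((d + s) / (D - d))
    return int(2.0 * math.pi / theta)

assert max_balls(20, 1, 0.1) == 54
assert max_balls(100.0, 13.0, 0.2) == 20
```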
1. #include <cstdio>
#include <algorithm>

struct LWPair {
    int l, w;
};

int main() {
    //freopen("input.txt","r",stdin);
    const int MAXSIZE = 5000;
    LWPair sticks[MAXSIZE];
    int store[MAXSIZE];
    int ncase, nstick, tmp, time, i, j;
    if (scanf("%d", &ncase) != 1) return -1;
    while (ncase-- && scanf("%d", &nstick) == 1) {
        for (i = 0; i < nstick; ++i) scanf("%d%d", &sticks[i].l, &sticks[i].w);
        std::sort(sticks, sticks + nstick, [](const LWPair &lhs, const LWPair &rhs) {
            return lhs.l > rhs.l || (lhs.l == rhs.l && lhs.w > rhs.w);
        });
        for (time = -1, i = 0; i < nstick; ++i) {
            tmp = sticks[i].w;
            for (j = time; j >= 0 && store[j] >= tmp; --j) ; // search from right to left
            if (j == time) { store[++time] = tmp; }
            else { store[j + 1] = tmp; }
        }
        printf("%d\n", time + 1);
    }
    return 0;
}
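The commenter's C++ above solves a different task (grouping length/weight pairs into the fewest chains after sorting by decreasing length). A line-by-line Python transcription makes the greedy easier to follow; the function name and the small sanity cases are mine:

```python
def min_chains(sticks):
    # sort by decreasing length, then decreasing weight (same order as the C++)
    sticks = sorted(sticks, key=lambda p: (-p[0], -p[1]))
    store = []  # last weight placed in each open chain
    for _, w in sticks:
        j = len(store) - 1
        while j >= 0 and store[j] >= w:   # scan from right to left
            j -= 1
        if j == len(store) - 1:
            store.append(w)               # no chain can take w: open a new one
        else:
            store[j + 1] = w              # extend an existing chain
    return len(store)

assert min_chains([(4, 9), (5, 2), (2, 1), (3, 5), (1, 4)]) == 2
assert min_chains([(2, 2), (1, 1), (2, 2)]) == 1
assert min_chains([(1, 3), (2, 2), (3, 1)]) == 3
```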
https://deepai.org/publication/on-the-secrecy-capacity-of-a-full-duplex-wirelessly-powered-communication-system-in-the-presence-of-a-passive-eavesdropper | # On the Secrecy Capacity of a Full-Duplex Wirelessly Powered Communication System in the Presence of a Passive Eavesdropper
In this paper, we investigate the secrecy capacity of a point-to-point, full-duplex (FD) wirelessly powered communication system in the presence of a passive eavesdropper. The considered system is comprised of an energy transmitter (ET), an energy harvesting user (EHU), and a passive eavesdropper (EVE). The ET transmits radio-frequency energy which is used for powering the EHU as well as for generating interference at EVE. The EHU uses the energy harvested from the ET to transmit confidential messages back to the ET. As a consequence of the FD mode of operation, both the EHU and the ET are affected by self-interference, which has contrasting effects at the two nodes. In particular, the self-interference impairs the decoding of the received message at the ET, whilst it serves as an additional energy source at the EHU. For this system model, we derive an upper and a lower bound on the secrecy capacity. For the lower bound, we propose a simple achievability scheme which offers rates close to the upper bound on the secrecy capacity. Our numerical results show significant improvements in terms of achievable rate when the proposed communication scheme is employed compared to its half-duplex counterparts, even for high self-interference values.
## I. Introduction
The security of wireless communication is of critical societal interest. Traditionally, encryption has been the primary method which ensures that only the legitimate receiver receives the intended message. Encryption algorithms commonly require that some information, colloquially referred to as a key, is shared only among the legitimate entities in the network. However, key management makes the encryption impractical in architectures such as radio-frequency identification (RFID) networks and sensor networks, since certificate authorities or key distributers are often not available and limitations in terms of computational complexity make the use of standard data encryption difficult [1], [2]. This problem with network security will be increasingly emphasised in the foreseeable future because of paradigms such as the Internet of Things (IoT). The IoT, as a “network of networks”, will provide ubiquitous connectivity and information-gathering capabilities to a massive number of communication devices. However, the low-complexity hardware and the severe energy constraints of the IoT devices present unique security challenges. To ensure confidentiality in such networks, exploitation of the physical properties of the wireless channel has become an attractive option [2]. Essentially, the presence of fading, interference, and path diversity in the wireless channel can be leveraged in order to degrade the ability of potential intruders to gain information about the confidential messages sent through the wireless channel [2]. This approach is commonly known as physical layer security, or alternatively as information-theoretic security [3].
Shannon and Wyner laid a solid foundation for studying the secrecy of many different system models in [4], [5], including communication systems powered by energy harvesting (EH), which have attracted significant attention recently [6], [7]. EH relies on harvesting energy from ambient renewable and environmentally friendly sources, such as solar, thermal, vibration, or wind, or from dedicated energy transmitters. The latter gives rise to wirelessly powered communication networks (WPCNs) [8]. EH is often considered a suitable supplement to IoT networks, since most IoT nodes have low power requirements, on the order of microwatts to milliwatts, which can easily be met by EH. In addition, when paired with physical layer security, WPCNs can potentially offer secure and ubiquitous operation [9]. An EH network with multiple power-constrained information sources has been studied in [10], where the authors derived an exact expression for the probability of a positive secrecy capacity. In [11] and [12], the secrecy capacity of the EH Gaussian multiple-input-multiple-output (MIMO) wire-tap channel under transmitter- and receiver-side power constraints has been derived. The secrecy outage probability of single-input-multiple-output (SIMO) and multiple-input-single-output (MISO) simultaneous wireless information and power transfer (SWIPT) systems was characterized in [13] and [14]-[15], respectively. Relaying networks with EH in the presence of a passive eavesdropper have been studied in [16]. Defence methods with EH friendly jammers have been proposed in [17] and [18], where the secrecy capacity and the secrecy outage probability have been derived.
In addition to physical layer security, another appealing option for networks with scarce resources such as WPCNs, is the full-duplex (FD) mode of operation. Recent results in the literature, e.g., [19]-[20], have shown that it is possible for transceivers to operate in the FD mode by transmitting and receiving signals simultaneously and in the same frequency band. The FD mode of operation can lead to doubling (or even tripling, see [21]) of the spectral efficiency of the network in question.
Motivated by these advances in FD communication and the applicability of physical layer security to WPCNs, in this paper, we investigate the secrecy capacity of a FD wirelessly powered communication system. Unlike our prior work which does not have an eavesdropper and therefore does not consider secrecy constraints [22], the network in this paper is comprised of an energy transmitter (ET) and an energy harvesting user (EHU) in the presence of a passive eavesdropper (EVE). In this system, the ET sends radio-frequency (RF) energy to the EHU, whereas, the EHU harvests this energy and uses it to transmit confidential information back to the ET. The signal transmitted by the ET serves a second purpose by acting as an interference signal for EVE. Both the ET and the EHU are assumed to operate in the FD mode, hence, both nodes transmit and receive RF signals in the same frequency band and at the same time. As a result, both are subjected to self-interference. The self-interference hinders the decoding of the information signal received from the EHU at the ET. At the EHU, the self-interference increases the amount of energy that can be harvested by the EHU [23]. Meanwhile, EVE is passive and only aims to intercept the confidential message transmitted by the EHU to the ET. For the considered system model, we derive an upper and a lower bound on the secrecy capacity. Furthermore, we provide a simple achievability scheme for the lower bound on the secrecy capacity. The proposed scheme in this paper is relatively simple and therefore easily applicable in practice in wirelessly powered IoT networks which require secure information transmissions. For example, sensors which are embedded in the infrastructure, like buildings, bridges or the power grid, monitor their environment and generate measurements. The generated measurements often contain sensitive information. 
An Unmanned Aerial Vehicle (UAV) can fly close to the sensors in order to power them, and then receive the generated data packets from the sensors. The proposed scheme can be used in this scenario, and it guarantees that such sensitive information will never be intercepted by an illegitimate third party.
The rest of the paper is organized as follows. Section II provides the system and channel models. Sections III and IV present the upper and the lower bounds on the secrecy capacity, respectively. In Section V, we provide numerical results and we conclude the paper in Section VI. Proofs of theorems/lemmas are provided in the Appendices.
## II. System Model and Problem Formulation
We consider a system model comprised of an EHU, an ET, and an EVE. In order to improve the spectral efficiency of the considered system, both the EHU and the ET are assumed to operate in the FD mode, i.e., both nodes transmit and receive RF signals simultaneously and in the same frequency band. Thereby, the EHU receives energy signals from the ET and simultaneously transmits information signals to the ET. Similarly, the ET transmits energy signals to the EHU and simultaneously receives information signals from the EHU. The signal transmitted from the ET also serves as interference to the EVE, and thereby increases its noise floor. Due to the FD mode of operation, both the EHU and the ET are subjected to self-interference, which has opposite effects at the two nodes, respectively. More precisely, the self-interference signal has a negative effect at the ET since it hinders the decoding of the information signal received from the EHU. As a result, the ET should be designed with a self-interference suppression apparatus, which can suppress the self-interference at the ET and thereby improve the decoding of the desired signal received from the EHU. On the other hand, at the EHU, the self-interference signal is desired since it increases the amount of energy that can be harvested by the EHU. Hence, the EHU should be designed without a self-interference suppression apparatus in order for the energy contained in the self-interference signal to be harvested, i.e., the EHU should perform energy recycling as proposed in [23]. Meanwhile, EVE remains passive and only receives, thus it is not subjected to self-interference.
### II-A. Channel Model
Let $V_{1i}$ and $V_{2i}$ denote random variables (RVs) which model the fading channel gains of the EHU-ET and ET-EHU channels in channel use $i$, respectively. Due to the FD mode of operation, the EHU-ET and the ET-EHU channels are identical and as a result the channel gains $V_{1i}$ and $V_{2i}$ are assumed to be identical, i.e., $V_{1i} = V_{2i} = V_i$. Moreover, let $F_i$ and $G_i$ denote RVs which model the fading channel gains of the EHU-EVE and ET-EVE channels in channel use $i$, respectively. We assume that all channel gains follow a block-fading model, i.e., they remain constant during all channel uses in one block, but change from one block to the next, where each block consists of (infinitely) many channel uses.

In the $i$-th channel use, let the transmit symbols at the EHU and the ET be modeled as RVs, denoted by $X_{1i}$ and $X_{2i}$, respectively. Moreover, in channel use $i$, let the received symbols at the EHU, the ET, and EVE be modeled as RVs, denoted by $Y_{1i}$, $Y_{2i}$, and $Y_{3i}$, respectively. Furthermore, in channel use $i$, let the RVs modeling the AWGNs at the EHU, the ET, and EVE be denoted by $N_{1i}$, $N_{2i}$, and $N_{3i}$, respectively, such that $N_{1i} \sim \mathcal{N}(0, \sigma_1^2)$, $N_{2i} \sim \mathcal{N}(0, \sigma_2^2)$, and $N_{3i} \sim \mathcal{N}(0, \sigma_3^2)$, where $\mathcal{N}(\mu, \sigma^2)$ denotes a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. Moreover, let the RVs modeling the additive self-interferences at the EHU and the ET in channel use $i$ be denoted by $I_{1i}$ and $I_{2i}$, respectively.

By using the notation defined above, the input-output relations describing the considered channel in channel use $i$ can be written as

$$Y_{1i} = V_i X_{2i} + I_{1i} + N_{1i}, \quad (1)$$
$$Y_{2i} = V_i X_{1i} + I_{2i} + N_{2i}, \quad (2)$$
$$Y_{3i} = F_i X_{1i} + G_i X_{2i} + N_{3i}. \quad (3)$$
### II-B. Self-Interference Model
A general model for the self-interference at the EHU and the ET is given by [24]

$$I_{1i} = \sum_{m=1}^{M} \tilde{Q}_{1,m}(i)\, X_{1i}^m, \quad (4)$$
$$I_{2i} = \sum_{m=1}^{M} \tilde{Q}_{2,m}(i)\, X_{2i}^m, \quad (5)$$

where $M$ is an integer, and $\tilde{Q}_{1,m}(i)$ and $\tilde{Q}_{2,m}(i)$ model the $m$-th component of the self-interference channel between the transmitter- and the receiver-ends at the EHU and the ET in channel use $i$, respectively. As shown in [24], the components in (4) and (5) for which $m$ is odd carry non-negligible energy; the remaining components carry negligible energy and therefore can be ignored. Furthermore, the higher-order components carry less energy than the lower-order ones. As a result, we can justifiably adopt the first-order approximation of the self-interference in (4) and (5), and model $I_{1i}$ and $I_{2i}$ as

$$I_{1i} = \tilde{Q}_{1i} X_{1i}, \quad (6)$$
$$I_{2i} = \tilde{Q}_{2i} X_{2i}, \quad (7)$$

where $\tilde{Q}_{1i} = \tilde{Q}_{1,1}(i)$ and $\tilde{Q}_{2i} = \tilde{Q}_{2,1}(i)$ are used for simplicity of notation. Thereby, the adopted self-interference model takes into account only the linear component of (4) and (5), i.e., the component for $m = 1$. The linear self-interference model has been widely used, e.g., in [24], [25].

By inserting (6) and (7) into (1) and (2), respectively, we obtain

$$Y_{1i} = V_i X_{2i} + \tilde{Q}_{1i} X_{1i} + N_{1i}, \quad (8)$$
$$Y_{2i} = V_i X_{1i} + \tilde{Q}_{2i} X_{2i} + N_{2i}. \quad (9)$$
To model the worst case of linear self-interference, we note the following. Since the ET knows which symbol it has transmitted in channel use $i$, the ET knows the outcome of the RV $X_{2i}$, denoted by $x_{2i}$. As a result of this knowledge, the noise that the ET "sees" in its received symbol given by (9) is $\tilde{Q}_{2i} x_{2i} + N_{2i}$, where $x_{2i}$ is a constant. Hence, the noise that the ET "sees", $\tilde{Q}_{2i} x_{2i} + N_{2i}$, will represent the worst case of noise, under a second moment constraint, if and only if $\tilde{Q}_{2i}$ is an independent and identically distributed (i.i.d.) Gaussian RV (this is due to the fact that the Gaussian distribution has the largest entropy under a second moment constraint, see [26]). Therefore, in order to derive results for the worst case of linear self-interference, we assume that $\tilde{Q}_{2i}$ is i.i.d. Gaussian in the rest of the paper. Meanwhile, $\tilde{Q}_{1i}$ is distributed according to an arbitrary probability distribution with mean $\bar{q}_1$ and variance $\alpha_1$.

Now, since $\tilde{Q}_{1i}$ and $\tilde{Q}_{2i}$ can be written equivalently as $\tilde{Q}_{1i} = \bar{q}_1 + Q_{1i}$ and $\tilde{Q}_{2i} = \bar{q}_2 + Q_{2i}$, where $\bar{q}_1$ and $\bar{q}_2$ are the means of $\tilde{Q}_{1i}$ and $\tilde{Q}_{2i}$, respectively, and $Q_{1i}$ and $Q_{2i}$ denote the remaining zero-mean components of $\tilde{Q}_{1i}$ and $\tilde{Q}_{2i}$, respectively, we can write $Y_{1i}$ and $Y_{2i}$ in (8) and (9), respectively, as

$$Y_{1i} = V_i X_{2i} + \bar{q}_1 X_{1i} + Q_{1i} X_{1i} + N_{1i}, \quad (10)$$
$$Y_{2i} = V_i X_{1i} + \bar{q}_2 X_{2i} + Q_{2i} X_{2i} + N_{2i}. \quad (11)$$

Since the ET always knows the outcome of $X_{2i}$, $x_{2i}$, and since given sufficient time it can always estimate the deterministic component of its self-interference channel, $\bar{q}_2$, the ET can remove $\bar{q}_2 x_{2i}$ from its received symbol $Y_{2i}$, given by (11), and thereby reduce its self-interference. In this way, the ET obtains a new received symbol, denoted again by $Y_{2i}$, as

$$Y_{2i} = V_i X_{1i} + Q_{2i} X_{2i} + N_{2i}. \quad (12)$$

Note that since $Q_{2i}$ in (12) changes independently from one channel use to the next, the ET cannot estimate and remove $Q_{2i} X_{2i}$ from its received symbol even though the ET knows the outcome of $X_{2i}$. Thus, $Q_{2i} X_{2i}$ in (12) is the residual self-interference at the ET. On the other hand, since the EHU benefits from the self-interference, it does not remove $\bar{q}_1 X_{1i}$ from its received symbol $Y_{1i}$, given by (10), in order to have a self-interference signal with a much higher energy, which it can then harvest. Hence, the received symbol at the EHU is given by (10).
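To make the effect of removing the deterministic component concrete, here is a small Monte-Carlo sketch (the parameter values are illustrative, not from the paper): before mean removal, the self-interference power is roughly $\bar{q}_2^2 + \alpha_2$; after it, only the variance $\alpha_2$ of the zero-mean residual remains:

```python
import math
import random

random.seed(1)
q_bar2, alpha2 = 2.0, 0.1          # illustrative mean and variance of the SI channel
n = 20000
x2 = [random.choice([-1.0, 1.0]) for _ in range(n)]          # known ET symbols
q = [random.gauss(q_bar2, math.sqrt(alpha2)) for _ in range(n)]

full = [qi * xi for qi, xi in zip(q, x2)]                    # SI term in (9)
residual = [(qi - q_bar2) * xi for qi, xi in zip(q, x2)]     # residual term in (12)

def power(s):
    return sum(v * v for v in s) / len(s)

assert power(residual) < power(full)           # mean removal shrinks SI power
assert abs(power(residual) - alpha2) < 0.05    # residual power is about alpha2
```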
In this paper, we are interested in the secrecy capacity of the channel characterised by the input-output relationships given by (10), (12), and (3).
### II-C. Energy Harvesting Model
The energy harvested by the EHU in channel use $i$ is given by [23]

$$E_{in,i} = \eta \left(V_i X_{2i} + \bar{q}_1 X_{1i} + Q_{1i} X_{1i}\right)^2, \quad (13)$$

where $\eta$ is the energy harvesting efficiency coefficient. For convenience, we have assumed unit time, and thus we use the terms power and energy interchangeably in the sequel. The EHU stores $E_{in,i}$ in its battery, which is assumed to have an infinitely large storage capacity. Let $B_i$ denote the amount of harvested energy in the battery of the EHU at the end of the $i$-th channel use. Moreover, let $E_{out,i}$ be the extracted energy from the battery in the $i$-th channel use. Then, $B_i$ can be written as

$$B_i = B_{i-1} + E_{in,i} - E_{out,i}. \quad (14)$$

Since in channel use $i$ the EHU cannot extract more energy than the amount of energy stored in its battery at the end of channel use $i-1$, the extracted energy from the battery in channel use $i$, $E_{out,i}$, can be obtained as

$$E_{out,i} = \min\{B_{i-1},\, X_{1i}^2 + P_p\}, \quad (15)$$

where $X_{1i}^2$ is the transmit energy of the desired transmit symbol in channel use $i$, $X_{1i}$, and $P_p$ is the processing energy cost of the EHU [27]. The processing cost $P_p$ models the system-level power consumption at the EHU, i.e., the energy spent by the electrical components in the circuit, such as AC/DC converters and RF amplifiers, as well as the energy spent for processing. Note that the ET also requires energy for processing. However, the ET is assumed to be equipped with a conventional power source which is always capable of providing the processing energy without affecting the energy required for transmission.

Now, if the total number of channel uses satisfies $n \to \infty$, if the battery of the EHU has an unlimited storage capacity, and furthermore

$$\mathbb{E}\{E_{in,i}\} \geq \mathbb{E}\{X_{1i}^2\} + P_p \quad (16)$$

holds, where $\mathbb{E}\{\cdot\}$ denotes statistical expectation, then the number of channel uses in which the extracted energy from the battery is insufficient, i.e., in which $E_{out,i} < X_{1i}^2 + P_p$ holds, is negligible compared to the number of channel uses in which the extracted energy is sufficient for both transmission and processing [28]. In other words, when the above three conditions hold, in almost all channel uses there will be enough energy to be extracted from the EHU's battery both for processing, $P_p$, and for the transmission of the desired transmit symbol $X_{1i}$, and thereby $E_{out,i} = X_{1i}^2 + P_p$ holds.
## III. Upper Bound on the Secrecy Capacity
For the considered channel, we propose the following theorem which establishes an upper bound on the secrecy capacity.
Theorem 1
Assuming that the average power constraint at the ET is $P_{ET}$, an upper bound on the secrecy capacity of the considered channel is given by

$$
\begin{aligned}
\max_{p(x_1|x_2,v),\, p(x_2|v)} \;\; & \sum_{x_2 \in \mathcal{X}_2} \sum_{v \in \mathcal{V}} I(X_1; Y_2 | X_2 = x_2, V = v)\, p(x_2|v)\, p(v) \\
& - \sum_{v \in \mathcal{V}} \sum_{g \in \mathcal{G}} \sum_{f \in \mathcal{F}} I(X_1; Y_3 | V = v, G = g, F = f)\, p(v)\, p(g)\, p(f) \\
\text{subject to} \quad \mathrm{C1}: \; & \sum_{x_2 \in \mathcal{X}_2} \sum_{v \in \mathcal{V}} x_2^2\, p(x_2|v)\, p(v) \leq P_{ET} \\
\mathrm{C2}: \; & \int_{x_1} \sum_{x_2 \in \mathcal{X}_2} \sum_{v \in \mathcal{V}} (x_1^2 + P_p)\, p(x_1|x_2,v)\, p(x_2|v)\, p(v)\, dx_1 \\
& \leq \int_{x_1} \sum_{x_2 \in \mathcal{X}_2} \sum_{v \in \mathcal{V}} E_{in}\, p(x_1|x_2,v)\, p(x_2|v)\, p(v)\, dx_1 \\
\mathrm{C3}: \; & \sum_{x_2 \in \mathcal{X}_2} p(x_2|v) = 1 \\
\mathrm{C4}: \; & \int_{x_1} p(x_1|x_2,v)\, dx_1 = 1,
\end{aligned} \quad (17)
$$

where $I(\cdot\,;\cdot\,|\,\cdot)$ denotes the conditional mutual information. In (17), the lower-case letters $x_1$, $x_2$, $v$, $g$, and $f$ represent realizations of the random variables $X_1$, $X_2$, $V$, $G$, and $F$, respectively, and their support sets are denoted by $\mathcal{X}_1$, $\mathcal{X}_2$, $\mathcal{V}$, $\mathcal{G}$, and $\mathcal{F}$, respectively. Constraint C1 in (17) constrains the average transmit power of the ET to $P_{ET}$, and C2 is due to (16), i.e., due to the fact that the EHU has to have harvested enough energy both for processing and for the transmission of its symbol. The maximum in the objective function is taken over all possible conditional probability distributions of $X_1$ and $X_2$, given by $p(x_1|x_2,v)$ and $p(x_2|v)$, respectively.
Proof:
Please refer to Appendix A, where the converse is provided.
### III-A. Simplified Expression of the Upper Bound on the Secrecy Capacity
The optimal input distributions at the EHU and the ET that are the solutions of the optimization problem in (17) and the resulting simplified expressions of the upper bound on the secrecy capacity are provided by the following lemma.
Lemma 1
The optimal input distribution at the EHU, found as the solution of the optimization problem in (17), is zero-mean Gaussian with variance $P_{EHU}(x_2,v)$, i.e., $X_1 \sim \mathcal{N}(0, P_{EHU}(x_2,v))$, where $P_{EHU}(x_2,v)$ can be found as the solution of

$$
\frac{v^2}{\sigma_2^2 + x_2^2 \alpha_2} + \left(1 + \frac{v^2 P_{EHU}(x_2,v)}{\sigma_2^2 + x_2^2 \alpha_2}\right) \sum_{f \in \mathcal{F}} \frac{f^2}{f^2 P_{EHU}(x_2,v) + \sigma_3^2}\, p(f)
= \left(1 + \frac{v^2 P_{EHU}(x_2,v)}{\sigma_2^2 + x_2^2 \alpha_2}\right) \lambda_2 \left(1 - \eta(\bar{q}_1^2 + \alpha_1)\right), \quad (18)
$$

where $\lambda_2$ is chosen such that C2 in (17) holds with equality.

On the other hand, the optimal input distribution at the ET, found as the solution of the optimization problem in (17), has the following discrete form

$$p(x_2|v) = p(x_2 = 0)\,\delta(x_2) + \frac{1}{2}\sum_{j=1}^{J} p(x_2 = x_{2j})\left(\delta(x_2 - x_{2j}) + \delta(x_2 + x_{2j})\right), \quad (19)$$

where $\delta(\cdot)$ denotes the Dirac delta function. Finally, the simplified expression of the upper bound on the secrecy capacity in (17), denoted by $C_s^u$, is given by

$$
\begin{aligned}
C_s^u = {} & \frac{1}{2} \sum_{v \in \mathcal{V}} \sum_{j=1}^{J} \log\!\left(1 + \frac{v^2 P_{EHU}(x_2,v)}{\sigma_2^2 + x_{2j}^2 \alpha_2}\right) p(x_2 = x_{2j})\, p(v) \\
& + \sum_{v \in \mathcal{V}} \sum_{g \in \mathcal{G}} \sum_{f \in \mathcal{F}} \Bigg[ \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma_{y_3}^2}} \sum_{j=1}^{J} p(x_2 = x_{2j})\, e^{-\frac{(y_3 - x_{2j})^2}{2\sigma_{y_3}^2}} \ln\!\Bigg( \frac{1}{\sqrt{2\pi\sigma_{y_3}^2}} \sum_{j=1}^{J} p(x_2 = x_{2j})\, e^{-\frac{(y_3 - x_{2j})^2}{2\sigma_{y_3}^2}} \Bigg)\, dy_3 \\
& \quad - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma_3^2}} \sum_{j=1}^{J} p(x_2 = x_{2j})\, e^{-\frac{(z - x_{2j})^2}{2\sigma_3^2}} \ln\!\Bigg( \frac{1}{\sqrt{2\pi\sigma_3^2}} \sum_{j=1}^{J} p(x_2 = x_{2j})\, e^{-\frac{(z - x_{2j})^2}{2\sigma_3^2}} \Bigg)\, dz \Bigg]\, p(v)\, p(g)\, p(f). \quad (20)
\end{aligned}
$$
## IV. Lower Bound on the Secrecy Capacity - An Achievable Secrecy Rate
From Lemma 1, we can see that the upper bound on the secrecy capacity cannot be achieved, since the EHU would have to know $x_{2i}^2$ in each channel use $i$ in order to calculate $P_{EHU}(x_2,v)$ from (18). In other words, the EHU cannot adapt its transmit power and the data rates of its codewords accordingly. The knowledge of $x_{2i}^2$ at the EHU is not possible since the input distribution at the ET, given by (19), is discrete with a finite number of probability mass points. However, if we set the input distribution at the ET to be binary, such that $X_{2i}$, $\forall i$, takes values from the set $\{-x_0(v), x_0(v)\}$, then the EHU can know $x_{2i}^2$ in each channel use since $x_{2i}^2 = x_0^2(v)$, $\forall i$, and therefore this rate can be achieved. Hence, to obtain an achievable lower bound on the secrecy capacity, we propose the ET to use the following input distribution

$$p(x_2|v) = \frac{1}{2}\left(\delta(x_2 - x_0(v)) + \delta(x_2 + x_0(v))\right). \quad (21)$$

The value of $x_0(v)$ will be determined in the following.
### IV-A. Simplified Expression of the Lower Bound on the Secrecy Capacity
The simplified expression for the lower bound on the secrecy capacity resulting from the ET using the distribution given by (21), is provided by the following lemma.
Lemma 2
Let us define $I(x)$ as

$$I(x) = \frac{2}{\sqrt{2\pi}\, x}\, e^{-x^2/2} \int_0^{\infty} e^{-\frac{y^2}{2x^2}} \cosh(y)\ln(\cosh(y))\, dy. \quad (22)$$

Depending on the channel qualities, we have three cases for the achievable secrecy rate.
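The function $I(x)$ in (22) admits a natural reading: for $Y$ drawn from the symmetric Gaussian mixture $\frac{1}{2}\mathcal{N}(x,\sigma^2) + \frac{1}{2}\mathcal{N}(-x,\sigma^2)$, the differential entropy is $h(Y) = \frac{1}{2}\ln(2\pi e\sigma^2) + x^2/\sigma^2 - I(x/\sigma)$, which matches the pattern of terms appearing in (24) and (27). Below is my own brute-force numerical check of this identity, not taken from the paper; the quadrature grids and the test point a = 1.5 are arbitrary choices:

```python
import math

def I_func(a, ymax=60.0, n=200000):
    # numerical evaluation of (22); ln(cosh(y)) computed in overflow-safe form
    h = ymax / n
    total = 0.0
    for k in range(1, n):
        y = k * h
        lc = y - math.log(2.0) + math.log1p(math.exp(-2.0 * y))  # ln cosh(y)
        total += math.exp(lc - y * y / (2.0 * a * a)) * lc
    return 2.0 / (math.sqrt(2.0 * math.pi) * a) * math.exp(-a * a / 2.0) * total * h

def mixture_entropy(x, sigma=1.0, n=200000):
    # -integral of p*ln(p) for p = 0.5*N(x, sigma^2) + 0.5*N(-x, sigma^2)
    lo, hi = -(x + 10.0 * sigma), x + 10.0 * sigma
    h = (hi - lo) / n
    norm = math.sqrt(2.0 * math.pi) * sigma
    s = 0.0
    for k in range(n):
        y = lo + (k + 0.5) * h
        p = 0.5 * (math.exp(-((y - x) ** 2) / (2.0 * sigma ** 2)) +
                   math.exp(-((y + x) ** 2) / (2.0 * sigma ** 2))) / norm
        if p > 0.0:
            s -= p * math.log(p) * h
    return s

a = 1.5
closed_form = 0.5 * math.log(2.0 * math.pi * math.e) + a * a - I_func(a)
assert abs(closed_form - mixture_entropy(a)) < 1e-3
```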
Case 1: If the following conditions hold

$$
\frac{1}{2} \sum_{v \in \mathcal{V}} \log\!\left(1 + \frac{v^2 P_{EHU}(x_2,v)}{\sigma_2^2 + P_{ET}\alpha_2}\right) p(v) + \lambda_1 P_{ET}
= \lambda_2 \left( \left(1 - \eta(\bar{q}_1^2 + \alpha_1)\right) \sum_{v \in \mathcal{V}} P_{EHU}(x_2,v)\, p(v) - \eta P_{ET} \Omega_V \right), \quad (23)
$$

and

$$
\begin{aligned}
\frac{1}{2} \sum_{v \in \mathcal{V}} \log\!\left(1 + \frac{v^2 P_{EHU}(x_2,v)}{\sigma_2^2 + P_{ET}\alpha_2}\right) p(v)
> \sum_{v \in \mathcal{V}} \sum_{g \in \mathcal{G}} \sum_{f \in \mathcal{F}} \Bigg[ & \frac{1}{2}\ln(2\pi e \sigma_{y_3}^2) + \frac{P_{ET}}{f^2 P_{EHU}(x_2,v) + \sigma_3^2} - I\!\left(\frac{\sqrt{P_{ET}}}{\sqrt{f^2 P_{EHU}(x_2,v) + \sigma_3^2}}\right) \\
& - \frac{1}{2}\ln(2\pi e \sigma_3^2) - \frac{P_{ET}}{\sigma_3^2} + I\!\left(\frac{\sqrt{P_{ET}}}{\sigma_3}\right) \Bigg]\, p(v)\, p(g)\, p(f), \quad (24)
\end{aligned}
$$

where $P_{EHU}(x_2,v)$ is the root of (18) for $x_2 = \sqrt{P_{ET}}$, then the input distribution at the ET has the following form

$$p(x_2|v) = \frac{1}{2}\left(\delta(x_2 - \sqrt{P_{ET}}) + \delta(x_2 + \sqrt{P_{ET}})\right), \quad \forall v. \quad (25)$$

On the other hand, the input distribution at the EHU is zero-mean Gaussian with variance $P_{EHU}(\sqrt{P_{ET}},v)$, i.e., $X_1 \sim \mathcal{N}(0, P_{EHU}(\sqrt{P_{ET}},v))$, where $P_{EHU}(\sqrt{P_{ET}},v)$ can be found as the solution of (18) for $x_2 = \sqrt{P_{ET}}$.

For Case 1, the achievable secrecy rate, denoted by $C_s^l$, is given by

$$
\begin{aligned}
C_s^l = {} & \frac{1}{2} \sum_{v \in \mathcal{V}} \log\!\left(1 + \frac{v^2 P_{EHU}(\sqrt{P_{ET}},v)}{\sigma_2^2 + P_{ET}\alpha_2}\right) p(v) \\
& + \sum_{v \in \mathcal{V}} \sum_{g \in \mathcal{G}} \sum_{f \in \mathcal{F}} \Bigg[ \int_{-\infty}^{\infty} \frac{1}{2\sqrt{2\pi\sigma_{y_3}^2}} \left(e^{-\frac{(y_3 - \sqrt{P_{ET}})^2}{2\sigma_{y_3}^2}} + e^{-\frac{(y_3 + \sqrt{P_{ET}})^2}{2\sigma_{y_3}^2}}\right) \ln\!\Bigg( \frac{1}{2\sqrt{2\pi\sigma_{y_3}^2}} \left(e^{-\frac{(y_3 - \sqrt{P_{ET}})^2}{2\sigma_{y_3}^2}} + e^{-\frac{(y_3 + \sqrt{P_{ET}})^2}{2\sigma_{y_3}^2}}\right) \Bigg)\, dy_3 \\
& \quad - \int_{-\infty}^{\infty} \frac{1}{2\sqrt{2\pi\sigma_3^2}} \left(e^{-\frac{(z_3 - \sqrt{P_{ET}})^2}{2\sigma_3^2}} + e^{-\frac{(z_3 + \sqrt{P_{ET}})^2}{2\sigma_3^2}}\right) \ln\!\left(e^{-\frac{(z_3 - \sqrt{P_{ET}})^2}{2\sigma_3^2}} + e^{-\frac{(z_3 + \sqrt{P_{ET}})^2}{2\sigma_3^2}}\right) dz_3 \Bigg]\, p(v)\, p(g)\, p(f). \quad (26)
\end{aligned}
$$
Case 2: If (23) does not hold, and

$$
\begin{aligned}
\frac{1}{2} \sum_{v \in \mathcal{V}} \log\!\left(1 + \frac{v^2 P_{EHU}(x_2,v)}{\sigma_2^2 + x_0^2(v)\alpha_2}\right) p(v)
> \sum_{v \in \mathcal{V}} \sum_{g \in \mathcal{G}} \sum_{f \in \mathcal{F}} \Bigg[ & \frac{1}{2}\ln(2\pi e \sigma_{y_3}^2) + \frac{x_0^2(v)}{f^2 P_{EHU}(x_2,v) + \sigma_3^2} - I\!\left(\frac{x_0(v)}{\sqrt{f^2 P_{EHU}(x_2,v) + \sigma_3^2}}\right) \\
& - \frac{1}{2}\ln(2\pi e \sigma_3^2) - \frac{x_0^2(v)}{\sigma_3^2} + I\!\left(\frac{x_0(v)}{\sigma_3}\right) \Bigg]\, p(v)\, p(g)\, p(f) \quad (27)
\end{aligned}
$$

holds, then the input distribution at the ET is given by

$$p(x_2|v) = \frac{1}{2}\left(\delta(x_2 - x_0(v)) + \delta(x_2 + x_0(v))\right), \quad (28)$$

whereas the input distribution at the EHU is zero-mean Gaussian with variance $P_{EHU}(x_0(v),v)$. In this case, $x_0(v)$ and $P_{EHU}(x_0(v),v)$ are the roots of the system of equations comprised of (18) for $x_2 = x_0(v)$ and the following equation

$$
\frac{1}{2}\log\!\left(1 + \frac{v^2 P_{EHU}(x_0(v),v)}{\sigma_2^2 + x_0^2(v)\alpha_2}\right) - \lambda_1 x_0^2(v)
= \lambda_2 \left( \left(1 - \eta(\bar{q}_1^2 + \alpha_1)\right) P_{EHU}(x_0(v),v) - \eta v^2 x_0^2(v) \right). \quad (29)
$$

For Case 2, the achievable secrecy rate is given by

$$
\begin{aligned}
C_s^l = {} & \frac{1}{2} \sum_{v \in \mathcal{V}} \log\!\left(1 + \frac{v^2 P_{EHU}(x_0(v),v)}{\sigma_2^2 + x_0^2(v)\alpha_2}\right) p(v) \\
& + \sum_{v \in \mathcal{V}} \sum_{g \in \mathcal{G}} \sum_{f \in \mathcal{F}} \Bigg[ \int_{-\infty}^{\infty} \frac{1}{2\sqrt{2\pi\sigma_{y_3}^2}} \left(e^{-\frac{(y_3 - x_0(v))^2}{2\sigma_{y_3}^2}} + e^{-\frac{(y_3 + x_0(v))^2}{2\sigma_{y_3}^2}}\right) \ln\!\Bigg( \frac{1}{2\sqrt{2\pi\sigma_{y_3}^2}} \left(e^{-\frac{(y_3 - x_0(v))^2}{2\sigma_{y_3}^2}} + e^{-\frac{(y_3 + x_0(v))^2}{2\sigma_{y_3}^2}}\right) \Bigg)\, dy_3 \\
& \quad - \int_{-\infty}^{\infty} \frac{1}{2\sqrt{2\pi\sigma_3^2}} \left(e^{-\frac{(z_3 - x_0(v))^2}{2\sigma_3^2}} + e^{-\frac{(z_3 + x_0(v))^2}{2\sigma_3^2}}\right) \ln\!\left(e^{-\frac{(z_3 - x_0(v))^2}{2\sigma_3^2}} + e^{-\frac{(z_3 + x_0(v))^2}{2\sigma_3^2}}\right) dz_3 \Bigg]\, p(v)\, p(g)\, p(f). \quad (30)
\end{aligned}
$$

Case 3: If neither (23) nor (27) holds, then the achievable secrecy rate is $C_s^l = 0$.
Proof:
In order for C1 in (17) to hold, or equivalently for C1 in (B) to hold, there are two possible cases for $x_0(v)$. In Case 1, C1 in (B) is satisfied if $X_{2i}$ is set to take values from the set $\{-\sqrt{P_{ET}}, \sqrt{P_{ET}}\}$. If (63) for $x_0(v) = \sqrt{P_{ET}}$ does not hold, then $X_{2i}$ is set to take values from the set $\{-x_0(v), x_0(v)\}$, where $x_0(v)$ is given by (29), in order for C1 in (B) to be satisfied. Now, since the EHU's input follows a Gaussian probability distribution, and $X_{2i}$ is distributed according to (25) and (28) for Case 1 and Case 2, respectively, we obtain the expressions in (26) and (30) by using (B) and (B).
Lemma 2 gives insights into the achievability scheme of the derived lower bound on the secrecy capacity. When Case 1 of Lemma 2 holds, the achievability scheme is very simple. In particular, the ET only chooses between $\sqrt{P_{ET}}$ and $-\sqrt{P_{ET}}$ in every channel use. When Case 2 of Lemma 2 holds, from (29) we see that the ET adapts its transmit power to the channel fading states of the EHU-ET channel, $v$: it increases its transmit power when $v$ is larger, and conversely, it lowers its transmit power when $v$ is not as favourable. As for the EHU, we first note that, since the EHU knows the square of the transmit symbol of the ET in a given channel use, the EHU can adapt its transmit power and its rate in the given channel use according to the expected self-interference at the ET, which depends on the value of $x_0(v)$. Secondly, the EHU also takes advantage of the better channel fading states of the EHU-ET channel: it increases its transmit power and rate when $v$ is larger, and conversely, it lowers its transmit power and rate when $v$ is not as strong. Thirdly, since the transmit power of the EHU is chosen such that constraint C2 in (17) holds, it depends on the processing cost $P_p$. Thereby, when Case 2 holds, the ET also takes into account the processing cost of the EHU.
IV-B Achievability of the Lower Bound on the Secrecy Capacity
We set the total number of channel uses (i.e., symbols) to , where denotes the total number of time slots used for the transmission and denotes the number of symbols transmitted per time slot, where , , , and .
Let denote a set comprised of the time slots during which the EHU has enough energy harvested and thereby transmits a codeword, and let denote a set comprised of the time slots during which the EHU does not have enough energy harvested and thereby it is silent. Let and , where denotes the cardinality of a set.
Transmissions at the ET: During the channel uses of a considered time slot with fading realisation , the ET’s transmit symbol is drawn from the probability distribution given in Lemma 2. Thus, in each channel use of the considered time slot, the ET transmits either or with probability if Case 1 in Lemma 2 holds, or transmits or with probability if Case 2 in Lemma 2 holds.
Reception of Energy and Transmission of Information at the EHU: The EHU first generates all binary sequences of length , where
$$R_{\rm EHU}=\frac{1}{2}\sum_{v\in\mathcal{V}}\log\!\left(1+\frac{v^2 P_{\rm EHU}(x_2,v)}{\sigma_2^2+x_2^2\alpha^2}\right)p(v), \qquad (31)$$
where and can be found from Lemma 1 depending on which case holds. Then the EHU uniformly assigns each generated sequence to one of groups, where is given by (2) for Case 1 of Lemma 2, or by (2) for Case 2 of Lemma 2. The confidential message drawn uniformly from the set is then assigned to a group. Next, the EHU randomly selects a binary sequence from the corresponding group to which
is assigned, according to the uniform distribution. This binary sequence is then mapped to a codeword comprised of
symbols, which is to be transmitted in time slots. The symbols of the codeword are drawn according to a zero-mean, unit-variance Gaussian distribution. Next, the codeword is divided into blocks, where each block is comprised of symbols. The length of each block is assumed to coincide with a single fading realization, and thereby to a single time slot.
The EHU will transmit in a given time slot only when it has harvested enough energy both for processing and transmission in the given time slot, i.e., only when its harvested energy accumulates to a level which is higher than , where is the fading gain in the time slot considered for transmission. Otherwise, the EHU is silent and only harvests energy. When the EHU transmits, it transmits the next untransmitted block of symbols of its codeword. To this end, each symbol of this block is first multiplied by , where can be found from Lemma 2, and then the block of symbols is transmitted over the wireless channel to the ET. The EHU repeats this procedure until it transmits all blocks of its entire codeword for which it needs time slots.
Receptions at the ET: When the ET receives a transmitted block by the EHU, it checks if the power level of the received block is higher than the noise level at the ET or not. If affirmative, the ET places the received block in its data storage, without decoding. Otherwise the received block is discarded.
Now, in time slots, the ET receives the entire codeword transmitted by the EHU. In order for the ET to be able to decode the transmitted codeword, the rate of the transmitted codeword must be equal to or lower than the capacity of the EHU-ET’s channel, given by
$$C_{\rm EHU-ET}=\frac{1}{2}\sum_{v\in\mathcal{V}}\log\!\left(1+\frac{v^2 P_{\rm EHU}(x_2,v)}{\sigma_2^2+x_2^2\alpha^2}\right)p(v). \qquad (32)$$
Note that the rate of the transmitted codeword is , given by (31). Now, since , the ET is able to decode the codeword transmitted by the EHU. Next, since the ET knows the binary sequences corresponding to each group, by decoding the transmitted codeword the ET determines the group to which the transmitted codeword belongs. As a result, the ET is able to decode the secret message .
In the time slots, the achieved secrecy rate is given by . It was proven in [28] that when the EHU is equipped with a battery with an unlimited storage capacity and when C2 in (17) holds, then as . Thereby, the achieved secrecy rate in time slots is given by , which is the actual lower bound of the channel secrecy capacity given by Lemma 2.
Receptions at the EVE: EVE receives the transmitted blocks by the EHU and the ET. Similarly to the ET, EVE places the received block in its data storage, without decoding.
In time slots, the EVE also receives the entire codeword transmitted by the EHU. In addition, EVE receives the signal from the ET, comprised of randomly generated symbols (see Lemma 2), which acts as noise to EVE and impairs the ability of EVE to decode the codeword from the EHU. To show that the EVE will not be able to decode the secret message, we use properties of the multiple access channel, resulting from the EHU and the ET transmitting at the same time. The multiple-access capacity region at the EVE formed by the transmission of the EHU and the ET is given by . The EVE will be able to decode the EHU’s codeword only if one of the following two cases holds, i.e., when or when , where is the entropy of the signal generated by the ET and is given by
$$R_{\rm ET}=-\left[p(x_2)\log_2 p(x_2)+p(-x_2)\log_2 p(-x_2)\right]. \qquad (33)$$
In (33), $$p(x_2)=p(-x_2)=1/2$$; see Lemma 2. As a result,
$$R_{\rm ET}=\log_2 2=1. \qquad (34)$$
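Since the ET draws its transmit symbol uniformly from the two-point set containing $$x_2$$ and $$-x_2$$, Eq. (33) reduces to the entropy of a fair coin, consistent with the 1-bit result in (34). A quick numeric check (the helper name is ours, not the paper's):

```python
import math

def symbol_entropy(p: float) -> float:
    """Entropy of the ET's two-point symbol distribution, per Eq. (33)."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A uniform choice between x2 and -x2 gives exactly 1 bit, matching Eq. (34).
assert symbol_entropy(0.5) == 1.0
```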
Case 1: For the EHU’s codeword to be decodable at the EVE in this case, and have to hold. For , we have
I(X2;Y3|V,F,G)=∑v∈V∑g
https://bookdown.org/pkaldunn/Textbook/writing-scientifically-abstract.html

## 37.5 Writing scientifically: Abstract
The Abstract is a short section at the start of an article which summarises the whole paper; it is not an introduction! An Abstract includes the most important and interesting parts of the research. The Abstract is often the most important part of any article, as it is the only part that many people will read.
Writing the Abstract after the paper is fully written is often sensible. Some (but not all) journals require a structured abstract, where the Abstract contains sections to be briefly completed (see Sect. 36.2). These abstracts are usually much easier for a reader to follow.
The Standards for Reporting Diagnostic Accuracy (STARD) statement lists essential items for Abstracts; these are (slightly adapted):
• Background and Objectives: List the study objectives (the RQ).
• Methods: Describe:
• The process of data collection;
• The type of study;
• The inclusion and exclusion criteria for individuals;
• The settings in which the data were collected;
• The sampling method (e.g. random or convenience sample);
• The tools or methods used to collect the data.
• Results: Provide
• The number of individuals in all groups included in the analysis;
• Estimates of precision of estimates (e.g. confidence intervals);
• Results of analysis (e.g. hypothesis tests).
• Discussion: Provide
• A general interpretation of the results;
• Implications for practice, including the intended use of the index test;
• Limitations of the study.
These loosely align with the six steps of research used in this book.
Example 37.3 (Structured abstract) A research study examined long-term mortality after amputation (Singh and Prasad 2016). The (structured) Abstract (slightly edited for brevity) is repeated below:
Background: Mortality after amputation is known to be extremely high and is associated with a number of patient features. We wished to calculate this mortality after first-time lower-limb amputation and investigate whether any population or treatment factors are associated with worse mortality.
Objective: To follow up individuals after lower limb amputation and ascertain the mortality rate as well as population or treatment features associated with mortality.
Study design: A prospective cohort study.
Methods: Prospective lower-limb amputations over 1 year ($$N=105$$) at a Regional Rehabilitation Centre were followed up for 3 years.
Results: After 3 years, 35 individuals in the cohort had died, representing a mortality of 33%. On initial univariate analysis, those who died were more likely to have diabetes mellitus ($$\chi^2 = 7.16$$, $$\text{df} = 1$$, $$p = 0.007$$) and less likely to have been fitted with a prosthesis ($$\chi^2 =5.84$$, $$\text{df}=1$$, $$p=0.016$$) […] Diabetes (odds ratio$${}=3.04$$, confidence intervals$${}=1.25-7.40$$, $$p=0.014$$) and absence of prosthesis-fitting (odds ratio$${}=2.60$$, confidence interval$${}=1.16-6.25$$, $$p=0.028$$) were independent predictors of mortality.
Conclusion: Mortality after amputation is extremely high and is increased in individuals with diabetes or in those who are not fitted with a prosthesis after amputation.
(Singh and Prasad 2016, p. 545)
### References
Cohen JF, Korevaar DA, Gatsonis CA, Glasziou PP, Hooft L, Moher D, et al. STARD for abstracts: Essential items for reporting diagnostic accuracy studies in journal or conference abstracts. BMJ. British Medical Journal Publishing Group; 2017;358:j3751.
Singh RK, Prasad G. Long-term mortality after lower-limb amputation. Prosthetics and Orthotics International. 2016;40(5).
https://www.intechopen.com/books/cloud-computing-architecture-and-applications/green-aware-virtual-machine-migration-strategy-in-sustainable-cloud-computing-environments
# Green-Aware Virtual Machine Migration Strategy in Sustainable Cloud Computing Environments
By Xiaoying Wang, Guojing Zhang, Mengqin Yang and Lei Zhang
Submitted: April 7th 2016. Reviewed: December 23rd 2016. Published: June 14th 2017.
DOI: 10.5772/67350
## Abstract
As cloud computing develops rapidly, the energy consumption of large-scale datacenters becomes non-negligible, and renewable energy is therefore considered as an extra supply for building sustainable cloud infrastructures. In this chapter, we present a green-aware virtual machine (VM) migration strategy in such datacenters powered by sustainable energy sources, considering the power consumption of both IT functional devices and cooling devices. We define an overall optimization problem from an energy-aware point of view and solve it using stochastic search approaches. The purpose is to utilize green energy sufficiently while guaranteeing the performance of applications hosted by the datacenter. Evaluation experiments are conducted under realistic workload traces and solar energy generation data in order to validate the feasibility. Results show that the green energy utilization increases remarkably, and more overall revenue can be achieved.
### Keywords
• virtual machine migration
• resource management
• power management
• renewable energy aware
## 1. Introduction
Large-scale datacenters, as the key infrastructure of cloud environments, usually own massive computing and storage resources in order to provide online services for thousands of millions of customers simultaneously. This leads to significant energy consumption, and thus high carbon footprint will be produced. Recent reports estimate that the emissions brought by information and computing technologies grow from 2% in 2010 [1] to 8% in 2016 and will grow to 13% by 2027 [2]. Hence, considering the heavy emissions and increasing impact on climate change, governments, organizations, and also IT enterprises are trying to find cleaner ways to manage the datacenters, for example, exploiting renewable energy such as wind, solar, and tidal.
However, the intermittency and the instability of the renewable energy sources make it difficult to efficiently utilize them. Fortunately, we know that the datacenter workloads are usually variable, which give us opportunities to find ways to manage the resources and power together inside the datacenters to utilize renewable energy sources more efficiently. On the other hand, to provide guaranteed services for third-party applications, the datacenter is responsible of keeping the quality of service (QoS) at a certain level, subject to the service level agreements (SLAs) [3].
In modern datacenters, applications are often deployed in virtual machines (VMs). Thanks to virtualization mechanisms, VMs are flexible and easy to migrate across different servers in the datacenter. In this chapter, we conduct research on energy-aware virtual machine migration methods for power and resource management in hybrid energy-powered datacenters. In particular, we employ thermal-aware ideas when designing the VM migration approaches. The holistic framework is described, the model is established, and heuristic and stochastic strategies are presented in detail. Experimental results show the effectiveness and feasibility of the proposed strategies. We hope that this chapter helps researchers to study the features of VM workloads in the datacenter and to find ways to utilize more green energy in place of traditional brown energy.
The remainder of this chapter is organized as follows. Section 2 introduces some relevant prior work in the field of energy-aware and thermal-aware resource and power management. Section 3 presents the entire system architecture we discuss in this chapter. Section 4 formulates the optimization problem corresponding to the issue we need to address. Section 5 describes the methods and strategies we designed to solve the problem. Section 6 illustrates the experimental results by comparing three different strategies, and finally conclusion is given out in Section 7, in which we also discuss about some of the possible future work.
## 2. Literature review
This section reviews the literature in the area of energy-aware resource management, thermal-aware power management, and green energy utilization in datacenters.
In the recent decade, many researchers have focused on power-aware management methods to handle workload fluctuation and to search for a trade-off between performance and power consumption. Sharma et al. [4] developed adaptive algorithms using a feedback loop that regulates CPU frequency and voltage levels in order to minimize the power consumption. Tanelli et al. [5] controlled CPUs by dynamic voltage scaling techniques in Web servers, aiming at decreasing their power consumption. Berl et al. [6] reviewed the current best practice and progress of energy-efficient technology and summarized the remaining key challenges for the future. Urgaonkar et al. [7] employed queuing theory to make decisions aiming at optimizing the application throughput and minimizing the overall energy costs. The above work attempts to reduce the power consumption while guaranteeing the system performance. On the basis of such ideas, we incorporate the usage of renewable energy into the optimization model, which can support performance improvement when the green energy is sufficient.
Besides, thermal-aware resource management approaches have also attracted researchers' interest recently. For example, Mukherjee et al. [8] developed two kinds of temperature-aware algorithms to minimize the maximum temperature in order to avoid hot spots. Tang et al. [9] proposed XInt, which can schedule tasks to minimize the inlet temperatures and also reduce the cooling energy costs. Pakbaznia et al. [10] combined chassis consolidation and efficient cooling to save power while keeping the maximum temperature under a controlled level. Wang et al. [11] designed two kinds of thermal-aware algorithms aiming at lowering the temperatures and minimizing the cooling system power consumption. Islam et al. [12] proposed DREAM, which manages the resources to allocate capacity to servers and distribute load considering temperature conditions. Similarly, we consider the impact of temperature on two kinds of cooling devices in this chapter, which directly determines the cooling power consumption.
As renewable energy becomes more widely used in datacenters, corresponding research starts to put insights into green energy–oriented approaches for managing the resources and power. Deng et al. [13] treated carbon-heavy energy as a primary cost and designed some mechanisms to allocate resources on demand. Goiri et al. designed GreenSlot [14] aiming at scheduling batch workloads and GreenHadoop [15] which could deal with MapReduce-based tasks. Both of them tried to efficiently utilize green energy to improve the application performance. Li et al. [16] proposed iSwitch, which can switch the power supply between wind power and utility grid according to the renewable power variation. Arlitt et al. [17] defined the “Net-Zero energy” datacenter, which needs on-site renewable generators to offset the usage of power coming from the electricity grid. Deng et al. also conducted research on the Datacenter Power Supply System (DPSS) and proposed an efficient, online control algorithm, SmartDPSS [18], which makes online decisions in order to fully leverage the available renewable energy and the varying electricity prices from the grid markets, for minimum operational cost. Liu et al. [19] presented a holistic approach that integrates renewable energy supply, dynamic pricing, cooling supply, and workload planning to improve the overall attainability of the datacenter.
Upon the basic concepts of these works, we exploit the possibility of efficient VM migration management toward sufficiently utilizing the renewable energy supply, incorporating the flexibility of transactional workloads, the cooling power consumption, and the amount of available green energy.
## 3. Datacenter architecture
This section describes the datacenter architecture, including the hybrid power supply and virtualization infrastructure.
Figure 1 shows the system architecture of the sustainable datacenter powered by both renewable energy and traditional energy supplies. The grid utility and renewable energy are combined together by the automatic transfer switch (ATS) in order to provide power supply for the datacenter. Both functional devices and cooling devices have to consume power, as shown in the bottom part of the figure.
Figure 2 illustrates the infrastructure of virtualized cloud datacenter. As shown, the underlying infrastructure of the datacenter is comprised of many physical machines (PMs), which are placed onto groups of racks. The utility grid bus and the renewable energy bus are connected together to supply power for the datacenter devices. Renewable sources will be used first, and the grid power will be leveraged as the supplementary energy supply.
As mentioned before, virtual machines (VMs) run on the underlying infrastructure and are used to host multiple applications, as shown in the virtualization layer in Figure 2. Different VMs on the same PM might serve different applications. In this chapter, we mainly discuss transactional applications, which mostly demand CPU resources rather than other types of resources.
## 4. Problem definition
This section defines necessary variables and also the problem we need to solve throughout this chapter.
### 4.1. Model of computing and service units
In the target problem, there are N heterogeneous physical machines in the virtualized cloud environment, and the available CPU resource capacity of PM i is denoted as Φi. The entire environment hosts M different applications, deployed on M different VMs. Denote the jth VM as VMj. Then, denote xj as the index of the PM which is hosting VMj. Denote φj as the CPU capacity allocated to VMj and dj as the demanded CPU capacity of application j at the current time slot.
### 4.2. Power consumption model
According to the mechanisms of dynamic voltage and frequency scaling (DVFS) techniques, here we use a simple power model which assumes that the power consumption of other components in the PM correlate well with CPU [20]. Denote pi as the power consumption of PM i in each time slot and piMAX as the maximum power consumption of PM i (100% occupied by workloads). Then, the following equation can be used to compute the PM power consumption:
$$p_i=p_i^{\max}\cdot\left(c+(1-c)\cdot\theta_i\right) \qquad (1)$$
where c is a constant number representing the ratio of the idle-state power consumption of a PM compared to the full-utilized-state power consumption [21] and θi is the current CPU utilization of PM i.
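As a numeric illustration of Eq. (1), the sketch below evaluates the PM power model in Python, borrowing the parameter values used later in Section 6.1 ($$p_i^{\max}=259$$ W, c = 66%); the function name is ours, not the chapter's:

```python
def pm_power(p_max: float, c: float, theta: float) -> float:
    """Eq. (1): power draw of one PM.

    p_max -- power at 100% CPU utilization (watts)
    c     -- idle-to-peak power ratio (0 <= c <= 1)
    theta -- current CPU utilization (0 <= theta <= 1)
    """
    return p_max * (c + (1.0 - c) * theta)

# Section 6.1 values: an idle PM still draws 66% of its peak power.
idle_w = pm_power(259.0, 0.66, 0.0)   # ~170.94 W
peak_w = pm_power(259.0, 0.66, 1.0)   # ~259.0 W
```

This linear, CPU-proportional model is what makes VM consolidation pay off: emptying a PM and putting it to sleep saves the large idle term c·p_max.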
Besides, we also consider the power cost spent on cooling devices when establishing the power model, which is usually much related to temperature. The cooling system we discuss here consists of both the traditional computer room air conditioning (CRAC) unit and the air economizer. According to relevant studies [10], the coefficient of performance (CoP) is often used to indicate the efficiency of a cooling system, which can be computed by
$$\mathrm{CoP}=\begin{cases}1/k(T_{\rm sup},T_{\rm out}), & \text{when } T_{\rm out}\le T_{\rm sup}\\[4pt]0.0068\,T_{\rm sup}^2+0.0008\,T_{\rm sup}+0.458, & \text{otherwise}\end{cases} \qquad (2)$$
where k is a factor reflecting the difference between outside air and target temperature, Tsup is the target supply temperature, and Tout is the outside temperature. As it can be observed, Eq. (2) contains two parts, corresponding to the situation whether the CRAC or the air economizer will be used for cooling, respectively.
Hence, the total power consumed by both functional devices and cooling devices can be calculated by
$$p_{\rm DC}=\left(1+\frac{1}{\mathrm{CoP}}\right)\cdot\sum_{i=1}^{N}p_i \qquad (3)$$
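Eqs. (2) and (3) together can be sketched as follows. The economizer branch treats k as a supplied function of the two temperatures, as the chapter describes; the default k used here is a made-up placeholder, not the chapter's calibration:

```python
def cop(t_sup: float, t_out: float, k=None) -> float:
    """Eq. (2): coefficient of performance of the cooling system.

    If the outside air is no warmer than the target supply temperature,
    the air economizer is used and CoP = 1 / k(t_sup, t_out); otherwise
    the CRAC polynomial model applies.
    """
    if t_out <= t_sup:
        if k is None:
            # Placeholder k (assumption): shrinks as the free-cooling gap widens.
            k = lambda ts, to: 1.0 / max(ts - to, 0.5)
        return 1.0 / k(t_sup, t_out)
    return 0.0068 * t_sup ** 2 + 0.0008 * t_sup + 0.458


def datacenter_power(pm_powers, t_sup: float, t_out: float) -> float:
    """Eq. (3): total IT power scaled by the cooling overhead (1 + 1/CoP)."""
    return (1.0 + 1.0 / cop(t_sup, t_out)) * sum(pm_powers)
```

A higher CoP means cheaper cooling: with CoP ≈ 4.7, every watt of IT load costs roughly an extra 0.21 W of cooling power.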
Furthermore, considering the impact of environmental temperature inside the datacenter, we also tried to exploit thermal-aware VM migration strategies. The power consumption of the servers will make the surrounding environmental temperature increase, due to the dissipated heat. Prior studies [11] provided ways to model the vector of inlet temperatures Tin as
$$\mathbf{T}_{\rm in}=\mathbf{T}_{\rm s}+\mathbf{D}\,\mathbf{p} \qquad (4)$$
where D is the heat transferring matrix, p is the power consumption vector, and Ts is the supplied air temperature vector.
The thermal-aware strategy tries to reduce the cooling power by balancing the temperature over the servers. Accordingly, the workload on different PMs should also be kept balanced. Denote Tsafe as the safe outlet temperature and Tserver as the outlet temperature of the hottest server. In order to lower the server temperature to the safe level, the output temperature of the cooling devices should be adjusted by Tadj = Tsafe − Tserver. Then, the adjusted output temperature will be Tnew = Tsup + Tadj. Hereafter, the CoP value can be determined by Tnew and Tout [22].
### 4.3. Modeling overhead and delay
To reduce the power consumption of the PM, it can be switched to sleeping state which can help save energy as much as possible. In addition, the operational costs also include the VM migration costs, since migrating VMs dynamically will definitely lead to some overhead. Denote ai as the flag recording whether PM i is active or sleeping. Denote cA as the cost for activating a PM from sleeping state and cMIG as the cost for migrating a VM from one PM to another. Besides, the time delay is also considered and integrated into the experiments in Section 6 for waking up a PM and migrating a VM.
### 4.4. Optimization problem formulation
From the resource providers’ point of view, the objective should be maximizing the total revenues by meeting the requirements of the hosted applications while minimizing the consumed power and other costs. Usually, the revenues from hosting the applications are related to service quality and the predefined level in the SLA. Assume here that the service quality is reflected by the CPU capacity scheduled to the target application. Denote dj as the demanded CPU capacity of APP j and φj as the CPU capacity amount scheduled to APP j. Denote Ωj() as the profit model for APP j, which gives the actual revenue by serving APP j at a certain quality level.
Since the dynamic action decisions are made during constant time periods, denote τ as the length of one time slot. Denote t as the current time slot, and then in time slot t+1, the goal is to maximize the net revenue subject to various constraints. Denote xj as the index of PM currently hosting VM j, and then the VM placement vector X can be denoted as
$$\mathbf{X}=\left(x_1,x_2,\ldots,x_j,\ldots,x_M\right) \qquad (5)$$
Hence, the optimizing objective of the defined problem can be expressed as
$$\max\ \sum_{j=1}^{M}\Omega_j\!\left(d_j,\varphi_j\right)-c_P\cdot p_{\rm DC}-c_A\cdot\sum_{i=1}^{N}\max\!\left(0,\ a_i(t+1)-a_i(t)\right)-c_{\rm MIG}\cdot\sum_{j=1}^{M}\mathbb{1}\!\left\{x_j(t+1)\neq x_j(t)\right\} \qquad (6)$$
where the first term is the total revenue summarized over all of the hosted applications, the second term represents the power consumption costs of the entire datacenter, the third term is the PM wake-up cost, and the last term represents the VM migration cost.
With respect to the objective defined above, the constraints could be expressed as
$$\sum_{j:\ x_j=i}\varphi_j\le \Phi_i\cdot a_i,\quad i=1,2,\ldots,N \qquad (7)$$

$$0\le\varphi_j\le d_j,\quad j=1,2,\ldots,M \qquad (8)$$

$$a_i\in\{0,1\},\quad x_j\in\{1,\ldots,N\} \qquad (9)$$
where Eq. (7) means that the allocated capacity cannot exceed the PM CPU capacity, Eq. (8) means that the CPU scheduled to a VM should be less than its demanded value, and Eq. (9) gives the validated ranges of the defined variables.
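A compact sketch of the feasibility checks (7)–(9) and the net-revenue objective (6) in Python. The `revenue` argument stands in for the summed Ω_j terms, the default cost constants are the ones quoted later in Section 6.1, and placement indices are 0-based here; all names and list encodings are our own illustration:

```python
def feasible(x, phi, d, a, capacity):
    """Constraints (7)-(9): capacity, demand cap, and variable ranges."""
    n = len(capacity)
    # (9) validity of activity flags and placement indices (0-indexed here)
    if any(ai not in (0, 1) for ai in a):
        return False
    if any(not (0 <= xj < n) for xj in x):
        return False
    # (8) allocated CPU never exceeds the demand
    if any(not (0 <= pj <= dj) for pj, dj in zip(phi, d)):
        return False
    # (7) per-PM load must fit within the capacity of an *active* PM
    for i in range(n):
        load = sum(pj for xj, pj in zip(x, phi) if xj == i)
        if load > capacity[i] * a[i]:
            return False
    return True


def net_revenue(x, x_prev, a, a_prev, revenue, p_dc,
                c_p=0.08, c_a=0.00024, c_mig=0.00012):
    """Objective (6): revenue minus power, wake-up, and migration costs."""
    wakeups = sum(max(0, ai - api) for ai, api in zip(a, a_prev))
    migs = sum(1 for xj, xpj in zip(x, x_prev) if xj != xpj)
    return revenue - c_p * p_dc - c_a * wakeups - c_mig * migs
```

`net_revenue` is exactly the fitness function F(X) that the stochastic search in Section 5 maximizes over placement vectors.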
## 5. Methods and strategies
In this section, we design some heuristic methods and also the joint hybrid strategy, and describe the ideas in detail.
### 5.1. Dynamic load balancing (DLB)
The idea of the DLB strategy is to keep the workload on different PMs balanced by dynamically placing VMs. To achieve the balancing effect, if one PM is detected to be more utilized than the specified upper threshold, some VMs on this PM will be chosen to migrate elsewhere. As a result, the PM utilization ratio will be controlled within a certain range, and there will be as few overloaded PMs as possible.
### 5.2. Dynamic VM consolidation (DVMC)
According to the features of virtualization techniques, VMs can be consolidated onto a few PMs to leave other PMs zero loaded. Hence, the main idea of the DVMC strategy is to consolidate VMs as much as possible, aiming at saving more power. Both an upper and a lower threshold of the PM utilization level are defined. If one PM is so lightly loaded that its utilization is less than the lower threshold, the VM consolidation process will be triggered. After this process, the VMs on underutilized PMs will be migrated onto other PMs. Finally, zero-loaded PMs can be switched into an inactive state in order to save more power.
### 5.3. Joint optimal planning (JOP)
The JOP strategy aims to optimize the VM placement scheme with the objective of sufficiently utilizing the renewable energy and reducing the total costs.
#### 5.3.1. Renewable energy forecasting
Since renewable energy is used as one source of power supply, we have to forecast the input power value in the next time slot. Here the k-nearest neighbor (k-NN) algorithm is adopted. A distance weight function is designed to weight each of the k nearest solar radiation values, as follows:
$$w_i=\frac{1/d_i}{1/d_1+1/d_2+\cdots+1/d_k} \qquad (10)$$
where di is the distance between the ith neighbor and the current point.
Figure 3 shows the forecasting effect on one day in October 2013. The data were measured and collected in Qinghai University, Xining, Qinghai Province of China. By analyzing the data points, the allowed absolute percentage errors (AAPE) of 97.01% data are less than 30%. The accuracy of the prediction method depends on the similar weather conditions in the recent past and may be affected by weather forecast data.
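The inverse-distance weighting of Eq. (10) can be sketched as a tiny k-NN predictor. The (feature, value) encoding of the history is an assumption for illustration, not the chapter's actual data layout:

```python
def knn_forecast(history, query, k=3):
    """Predict the next solar value as the inverse-distance-weighted
    mean of the k nearest historical points, per Eq. (10).

    history -- list of (feature, value) pairs, the feature being e.g. a
               comparable past measurement for the same hour of day
    query   -- feature of the slot to forecast
    """
    # k nearest neighbors by absolute feature distance
    nearest = sorted(history, key=lambda fv: abs(fv[0] - query))[:k]
    eps = 1e-9  # avoid division by zero on exact matches
    inv = [1.0 / (abs(f - query) + eps) for f, _ in nearest]
    total = sum(inv)
    weights = [w / total for w in inv]          # Eq. (10)
    return sum(w * v for w, (_, v) in zip(weights, nearest))
```

Closer neighbors dominate the weighted sum, so an exact historical match effectively pins the forecast to its recorded value.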
#### 5.3.2. Stochastic search
In order to look for the best scheme of VM placement, we use stochastic search to do the optimization. Specifically, the genetic algorithm (GA) is modified and employed as follows:
For a typical genetic algorithm, there are two basic items as follows:
1. A genetic representation of solution space
Here, for this problem, the decision variable is the vector of VM placement, which can be denoted as X = (x1, x2 …, xM).
1. A fitness function to compute the value of each solution
As described, the objective function defined by Eq. (6) could be used as the fitness function. It is functional in measuring the quality of a certain solution. Hereafter, the fitness function will be denoted as F(X).
The procedure of genetic algorithm can be divided into following steps:
1. Initialization
First, we add the current configuration vector in the last time epoch into the initial generation. Besides, a fixed number (denoted as ng) of individual solutions will be randomly generated. Specifically, a part of the elements of each solution will be generated randomly, in the range of 0~N−1.
1. Selection
After initialization, the generations will be produced successively. For each generation, nb best-ranking individuals from the current and past population will be selected to breed a new generation. Then, in order to keep the population constant, the remained individuals will either be removed or replicated based on its quality level. The selection procedure is conducted based on fitness, which means that solutions with higher fitness values are more prone to be selected.
According to such concepts, the probability to select an individual Xi can be calculated as
$$P(X_i)=\frac{F(X_i)}{\sum_{k=1}^{n_g}F(X_k)} \qquad (11)$$
In this way, less fit solutions are less likely to be selected, which helps to keep the diversity of the population and to avoid premature convergence on poor solutions.
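Fitness-proportional (roulette-wheel) selection implementing Eq. (11) can be sketched as follows, assuming all fitness values are positive (e.g. net revenues shifted above zero); the function name is ours:

```python
import random

def select(population, fitness, rng=random):
    """Pick one individual with probability F(X_i) / sum_k F(X_k), Eq. (11)."""
    total = sum(fitness)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return individual
    return population[-1]  # numerical safety net
```

Over many draws, an individual holding 90% of the total fitness is picked roughly 90% of the time.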
1. Reproduction
After selection, a second generation of population should be generated from those selected solutions through two kinds of genetic operators: crossover and mutation.
The crossover operator first selects two different individuals, denoted as $$X^1=(x_1^1,x_2^1,\ldots,x_M^1)$$ and $$X^2=(x_1^2,x_2^2,\ldots,x_M^2)$$. Then, a cutoff point k is set in the range 1~M. Both $$X^1$$ and $$X^2$$ are divided into two halves, and their second halves are swapped, yielding $$X^{1\prime}=(x_1^1,x_2^1,\ldots,x_k^1,x_{k+1}^2,\ldots,x_M^2)$$ and $$X^{2\prime}=(x_1^2,x_2^2,\ldots,x_k^2,x_{k+1}^1,\ldots,x_M^1)$$. As a result, two new individuals come out, which may or may not already be present in the current population.
After crossover, the mutation operator mutates each individual with a certain probability. The mutation process randomly chooses an element of the vector and changes its value, thus converting the individual into another one.
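The two reproduction operators on a placement vector can be sketched as follows (the operator signatures are our own):

```python
import random

def crossover(parent1, parent2, rng=random):
    """Single-point crossover: swap the tails after a random cutoff k."""
    m = len(parent1)
    k = rng.randint(1, m - 1)
    return parent1[:k] + parent2[k:], parent2[:k] + parent1[k:]

def mutate(individual, n_pms, prob, rng=random):
    """With probability `prob`, reassign one random VM to a random PM."""
    child = list(individual)
    if rng.random() < prob:
        j = rng.randrange(len(child))
        child[j] = rng.randrange(n_pms)
    return child
```

Crossover recombines existing placements, while mutation injects fresh PM assignments; together they keep the search from stalling on a single placement pattern.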
1. Termination
This production process repeats until the number of generations reaches a predefined level.
## 6. Evaluation results
This section shows our experiments comparing different strategies, and then the results and some details will be discussed.
### 6.1. Parameter settings
For the following experiments, we used C#.NET to develop the simulation environment and set up the prototype test bed. Specifically, a virtualized datacenter is established, comprised of 40 PMs with a CPU capacity of 1500 MIPS each. For the power model, $$p_i^{\max}$$ is set to 259 W according to Ref. [23], and c is set to 66% according to Ref. [21]. Then, 100 VMs hosting different applications were simulated and placed on the PMs. The workload on each VM fluctuates with time, with values randomly generated under a uniform distribution.
Table 1 shows all of the parameter settings in detail, and Figure 4 shows the variation of the total CPU demand summed over all of the workloads, from which it can be seen that there are two peaks in the 24-h period.
|             | APP 1 | APP 2 | APP 3 |
|-------------|-------|-------|-------|
| Lower bound | 50    | 40    | 30    |
| Upper bound | 90    | 60    | 70    |
| $U_j^{max}$ | 100   | 60    | 80    |
### Table 1.
Parameter settings for example applications.
We defined a nonlinear revenue function for each application, as mentioned in Section 4. Figure 5 shows three typical examples. It can be seen that the revenue of each application changes elastically within a certain range.
The control interval for reconfiguration actions in the experiment is set to 60 minutes. According to Refs. [24-26], we set $c_P$ to \$0.08, $c_A$ to \$0.00024, and $c_{MIG}$ to \$0.00012. The VM migration delay is set to 5 s, and the PM wakeup delay is set to 15 s. The total experiment time is set to 1440 minutes. The temperature data used in the experiments come from realistic measurements recorded on 4 October 2013 on the campus of Qinghai University, Xining, Qinghai Province, China, as shown in Figure 6.
### 6.2. Results
In order to investigate the effectiveness of the proposed strategy, we compare the performance of three different strategies: DLB, DVMC, and JOP, as stated in Section 5.
#### 6.2.1. Revenues
As described in Section 4, the net revenue is the main optimization objective in our problem. Figure 7 shows the total accumulated net revenue throughout the 1440-min experiment. It can be observed that the JOP strategy keeps the net revenue higher than the other two. Moreover, the DVMC approach behaves better than DLB, since it can save more power by VM consolidation. Examining the detailed data, we found that JOP's accumulated revenue is 38.2 and 24.2% higher than that of DLB and DVMC, respectively.
#### 6.2.2. Power consumption
Now we investigate the power consumption in detail when using JOP, as Figure 8 illustrates. It can be observed from the figure that JOP follows the solar energy variation quite well. When the solar power drops to an insufficient level, JOP tends to degrade application performance to save power. Conversely, when the solar power rises, JOP allows both functional and cooling devices to consume more power, under the constraints of the input power. Interestingly, the temperature varies more or less in step with the solar energy generation, which implies that thermal-aware co-scheduling of energy supply and consumption might be promising, since temperature also affects energy consumption to some extent.
#### 6.2.3. PM Management
Figure 9 shows the number of active servers under the three strategies. We can see that JOP increases or decreases the number of active servers according to the variation in the amount of solar power generated. Under the DLB strategy, all PMs are kept active so that the system-wide workload can be balanced. Comparatively, DVMC uses far fewer active PMs than DLB due to VM consolidation. However, it still uses more PMs at night because it cannot effectively trade off revenues against costs. Overall, JOP manages PMs dynamically toward the optimization objective and thus keeps the number of active PMs as low as needed.
#### 6.2.4. Energy for cooling
The cooling energy consumption under the three strategies is also investigated, as shown in Figure 10. As illustrated, JOP allows cooling devices to consume more power until after 18:00, showing its ability to track the solar energy variation. By forecasting the amount of solar power generated, JOP can make better decisions for migrating VMs according to the optimized scheme.
## 7. Conclusion and future work
As the energy consumption of large-scale datacenters becomes significant and attracts more attention, renewable energy is being exploited by more enterprises and cloud providers as a supplement to traditional brown energy. In this chapter, we introduced the target system environment, which uses a hybrid energy supply mixing grid energy and renewables. From the datacenter's own point of view, the optimization problem was defined with the aim of maximizing net revenue. Accordingly, three different strategies were designed to migrate VMs across PMs dynamically, among which the JOP strategy leverages stochastic search to aid the optimization process. The results illustrate the feasibility and effectiveness of the proposed strategy, and further investigation of the accumulated revenues, PM states, and cooling power consumption reveals more details of its working mechanisms.
As datacenters grow larger and larger and thus still require enormous amounts of energy, it can be expected that green energy sources will attract more attention as power supplies in place of traditional brown energy. Our work explores strategies to migrate VMs inside a datacenter in a green-aware way. Nevertheless, many challenges remain in leveraging sustainable energy to power datacenters. On one hand, more kinds of clean energy sources besides wind and solar, such as hydrogen and fuel cells, could be exploited, and their features should be studied and developed. On the other hand, how to synthetically utilize batteries, the utility grid, and datacenter loads to solve the intermittency and fluctuation problems of these energy sources remains a difficult problem for system designers. In addition, it would also be necessary and interesting to study the air-flow characteristics among racks and server nodes inside the datacenter room and to develop corresponding thermal-aware scheduling approaches.
## Acknowledgments
This work is partially supported in part by National Natural Science Foundation of China (No. 61363019, No. 61563044, and No. 61640206) and National Natural Science Foundation of Qinghai Province (No. 2014-ZJ-718, No. 2015-ZJ-725).
© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter Copy to clipboard
Xiaoying Wang, Guojing Zhang, Mengqin Yang and Lei Zhang (June 14th 2017). Green-Aware Virtual Machine Migration Strategy in Sustainable Cloud Computing Environments, Cloud Computing - Architecture and Applications, Jaydip Sen, IntechOpen, DOI: 10.5772/67350. Available from:
We are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists. Our readership spans scientists, professors, researchers, librarians, and students, as well as business professionals. We share our knowledge and peer-reveiwed research papers with libraries, scientific and engineering societies, and also work with corporate R&D departments and government entities. | 2021-03-07 12:50:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6281047463417053, "perplexity": 1596.229748403913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00113.warc.gz"} |
https://www.gradesaver.com/textbooks/science/physics/CLONE-afaf42be-9820-4186-8d76-e738423175bc/chapter-17-section-17-2-phase-changes-got-it-page-319/17-2 | Essential University Physics: Volume 1 (4th Edition)
It is equal to 100 degrees Celsius. After all, at standard temperature and pressure, the boiling point of water is 100 degrees Celsius. Once it reaches this point, the water will no longer continue to increase in temperature; rather, the additional energy will go to turning the water molecules into vaporized $H_2 O$. | 2021-02-28 03:36:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1784953624010086, "perplexity": 309.55131508222723}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360107.7/warc/CC-MAIN-20210228024418-20210228054418-00190.warc.gz"} |
https://physicsoverflow.org/44351/probability-of-finding-a-particle-outside-its-light-cone | # Probability of finding a particle outside its light cone
+ 1 like - 0 dislike
234 views
Say we have just created a particle (a high-probability one-particle state): is the probability of a very distant detector being triggered at the moment of creation (the probability of finding the particle outside its light cone) zero according to QFT?
Since we can detect particles and make histograms of the positions where they are found using detectors, this seems like a reasonable question to ask. I hope that QFT says a detector cannot detect particles outside the light cone, because if that is not the case, we can imagine an experiment where information can be sent FTL:
Consider a ridiculously large number of hydrogen atoms/electrons near person A, and a very far away person B measuring the rate of particles he detects. So when A makes some movement, if the probability outside the light cone changes immediately, B's detection rate immediately changes, and hence this can be used for communication.
If you say that the probability outside the light cone doesn't change immediately, that leads us to a grave situation. Assume A himself has a detector and detects most of the particles (sweeping through the high-probability region's peaks). It makes no sense to say B observes the same rate of particle detection, since the particles have already been 'used up' by A.
(Some clarification: When I say particles near A, what I mean is that we intuitively expect particles/fields to have some kind of probability distribution. It is reasonable to assume that the distribution of the atoms/electrons of my phone has a peak in my hand and that the probability of finding the phone's electrons far away is extremely low. So even if QFT doesn't have a position operator or whatever, it should somehow be able to talk about this.)
| 2022-10-07 08:20:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6348710656166077, "perplexity": 1010.155174727225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00768.warc.gz"} |
https://math.stackexchange.com/questions/780969/number-of-binary-strings-containing-at-least-n-consecutive-1/781265 | # Number of binary strings containing at least n consecutive 1
Let $Z_{m, n, q}$ be the number of binary strings (ordered lists of 0's and 1's) of length $m$, containing exactly $q$ 1's and at least $n$ consecutive 1's at any part of the string. I'm trying to find a formula for this number that is easily calculated (by a computer). I've managed to find the following recurrence relation:
$$Z_{m+1, n, q} = Z_{m, n, q} + Z_{m, n, q-1} + \binom{m-n}{q-n} - Z_{m-n, n, q-n}$$ Explanation:
Let us call the strings we want to count good strings. The first term counts all good strings that end with a 0. The second term counts all good strings that end with a 1 and have $n$ consecutive 1's within the first $m$ elements of the string. The third term counts all good strings which have $n$ consecutive 1's in the last $n$ elements of the string. Finally, the fourth term accounts for the overlap between the second and third terms by removing all good strings which end with $n$ consecutive 1's but also have $n$ consecutive 1's in the first $m-n$ elements.
Is this formula correct? I think it is, but I also have the feeling that it is overly complicated, and I have no clue how to simplify it (or tackle the recurrence relation). Could anyone help me?
PS: I'm interested in this formula because I am trying recreate this graph: http://wizardofodds.com/gambling/betting-systems/martingale.gif . Don't worry, I know you can't beat the house edge, just curious to see how it changes for unfair coins and numbers.
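One way to test the proposed recurrence is to compare it against brute-force enumeration for small parameters. A sketch: it checks all $m \ge n$ so that the binomial term $\binom{m-n}{q-n}$ is well defined, with $Z$ taken to be 0 for negative arguments.

```python
from itertools import product
from math import comb

def Z_brute(m, n, q):
    """Count binary strings of length m with exactly q ones that contain
    at least n consecutive ones (0 if m or q is negative)."""
    if m < 0 or q < 0:
        return 0
    return sum(1 for bits in product("01", repeat=m)
               if bits.count("1") == q and "1" * n in "".join(bits))

def C(a, b):
    """Binomial coefficient with the convention C(a, b) = 0 outside 0 <= b <= a."""
    return comb(a, b) if 0 <= b <= a else 0

def recurrence_holds(n, m_max):
    """Check Z(m+1,n,q) = Z(m,n,q) + Z(m,n,q-1) + C(m-n,q-n) - Z(m-n,n,q-n)
    for all n <= m <= m_max and 0 <= q <= m + 1."""
    return all(
        Z_brute(m + 1, n, q)
        == Z_brute(m, n, q) + Z_brute(m, n, q - 1)
           + C(m - n, q - n) - Z_brute(m - n, n, q - n)
        for m in range(n, m_max + 1)
        for q in range(m + 2)
    )
```

For example, $Z(5,2,3) = 9$: of the $\binom{5}{3} = 10$ strings with three 1's, only 10101 lacks two consecutive 1's.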
• Is $p+q=m$ here? If not then what is $p$? May 4, 2014 at 16:49
• Sorry, $p$ should have been $q$. To clarify, the third term counts only the good strings which end with exactly $n$ 1's. Hence the last $n+1$ elements are $011\ldots1$. Then we know there must be $(m+1) - (n+1)$ other elements that contain $q-n$ 1's, because there are $q$ 1's in total. So there are $\binom{m-n}{q-n}$ such strings. The fourth term then removes the good strings that were already included in the second term.
– G.L.
May 4, 2014 at 16:59
Lemma 1 (balls and sticks): The number of non-negative integer solutions of $$x_1+x_2+\dots +x_a=b$$ is $$\binom{b+a-1}{a-1}$$
Lemma 2: Let $$R(b,a,k)$$ denote the number of non-negative integer solutions of $$x_1+x_2+\dots +x_a=b$$ where each $$0\le x_i < k$$. By inclusion-exclusion, $$R(b,a,k)=\sum_{i=0}^{\lfloor b/k\rfloor}(-1)^i\binom ai\binom{b-ki+a-1}{a-1}$$
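Lemma 2's inclusion-exclusion count can be checked against direct enumeration for small parameters (a quick sketch):

```python
from itertools import product
from math import comb

def R(b, a, k):
    """Non-negative integer solutions of x1+...+xa = b with each x_i < k,
    via the inclusion-exclusion formula of Lemma 2."""
    return sum((-1) ** i * comb(a, i) * comb(b - k * i + a - 1, a - 1)
               for i in range(b // k + 1))

def R_brute(b, a, k):
    """Same count by direct enumeration (small parameters only)."""
    return sum(1 for xs in product(range(k), repeat=a) if sum(xs) == b)
```

For instance, $R(3,3,2) = 1$: the only solution of $x_1+x_2+x_3=3$ with every part below 2 is $(1,1,1)$.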
Solution: The problem is equivalent to finding the number of non-negative solutions to $$x_1+x_2+\dots +x_{m-q+1}=q$$ such that there is at least one part is not less than $$n$$. Hence, $$\binom{q+(m-q+1)-1}{(m-q+1)-1}-R(q,m-q+1,n)$$ gives the required answer. | 2022-08-11 04:59:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6191510558128357, "perplexity": 184.58513513381658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00460.warc.gz"} |
https://shelah.logic.at/papers/1217/ | # Sh:1217
• Asgharzadeh, M., Golshani, M., & Shelah, S. Graphs represented by Ext. Preprint. arXiv: 2110.11143
• Abstract:
Daniel Herden asked for which graphs $(\mu,R)$ we can find a family $\{G_\alpha\}$ of Abelian groups such that $\mathrm{Ext}(G_\alpha,G_\beta) = 0$ iff $(\alpha,\beta) \in R$. We show that this is always possible for bipartite graphs in ZFC. We also give a consistent positive answer for the general case.
• Version 2021-10-14 (32p)
Bib entry
@article{Sh:1217,
  author = {Asgharzadeh, M. and Golshani, M. and Shelah, S.},
  title = {Graphs represented by {Ext}},
  note = {Preprint},
  eprint = {arXiv:2110.11143}
}
https://www.waterstones.com/book/fourier-analysis-in-convex-geometry/alexander-koldobsky/9780821837870 | Fourier Analysis in Convex Geometry - Mathematical Surveys and Monographs No. 116 (Hardback)
Alexander Koldobsky (author)
£57.95
Hardback Published: 15/04/2005
The study of the geometry of convex bodies based on information about sections and projections of these bodies has important applications in many areas of mathematics and science. In this book, a new Fourier analysis approach is discussed. The idea is to express certain geometric properties of bodies in terms of Fourier analysis and to use harmonic analysis methods to solve geometric problems. One of the results discussed in the book is Ball's theorem, establishing the exact upper bound for the $(n-1)$-dimensional volume of hyperplane sections of the $n$-dimensional unit cube (it is $\sqrt{2}$ for each $n\geq 2$). Another is the Busemann-Petty problem: if $K$ and $L$ are two convex origin-symmetric $n$-dimensional bodies and the $(n-1)$-dimensional volume of each central hyperplane section of $K$ is less than the $(n-1)$-dimensional volume of the corresponding section of $L$, is it true that the $n$-dimensional volume of $K$ is less than the volume of $L$? (The answer is positive for $n\le 4$ and negative for $n>4$.) The book is suitable for graduate students and researchers interested in geometry, harmonic and functional analysis, and probability. Prerequisites for reading this book include basic real, complex, and functional analysis.
Publisher: American Mathematical Society
ISBN: 9780821837870
Weight: 528 g
Dimensions: 263 x 183 x 15 mm
Edition: Illustrated edition
Paperback | 2018-11-19 00:39:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7421883940696716, "perplexity": 238.6867579482263}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744803.68/warc/CC-MAIN-20181119002432-20181119023552-00047.warc.gz"} |
http://math.stackexchange.com/questions/17405/division-of-other-curves-than-circles | # Division of Other curves than circles
The coordinates of the endpoint of an arc of a circle of length $\frac{2\pi}{p}$ are algebraic numbers, and when $p$ is a Fermat prime you can find them in terms of square roots.
Gauss said that the method applied to a lot more curves than the circle. Will you please tell me if you know of any worked examples of this (finding the algebraic points on other curves)?
Am I correct, or are you saying that there exists a rational number $p$ such that $\frac{2\pi}{p}$ is algebraic over the rationals?? – Asaf Karagila Feb 15 '11 at 18:36
I think he might mean the coordinates of the end points of the arc. It's easy to see from De Moivre's formula. Maybe a more appropriate word would be to use the word constructible? – Raskolnikov Feb 15 '11 at 19:09
A circle is defined by the algebraic curve $x^2+y^2=1$ and the line of slope 1/p is given by the curve $x-py=0$. So the intersection is a (1 dimensional) affine variety over the rationals, hence its points lie in the algebraic numbers. Not sure if that's the kind of thing you're asking about. I'm not sure about the best way of generalizing the statement about finding it in terms of square roots. – George Lowther Feb 15 '11 at 21:30
Chapter 3 of Prasolov/Solovyev may be of interest. – J. M. Apr 19 '11 at 18:06
Apparently the same exercise can be done for the lemniscate with the same result. For instance, see http://www.jstor.org/stable/2321821 where Theorem 2 states that
If the lemniscate can be divided into $n$ parts with ruler and compass, then $n$ is a power of two times a product of distinct Fermat primes.
The main difficulty, when compared to the better known theorem about the circle, appears to be the shift from circular functions (sin, cos) to elliptic functions. For instance one requires some sort of addition theorem for these functions.
This is only one more curve, but one that can be associated to the important elliptic integral $\int \frac{dt}{\sqrt{1-t^4}}$, making an appearance as the arc-length of the lemniscate. I'm guessing there is a wide class of curves that are associated to elliptic integrals this way, but I doubt that any of them would naturally be as interesting as the circle or the lemniscate.
See this. In particular, the Gauss-Wantzel Theorem says the following:
Theorem. A regular $n$-gon can be constructed with compass and straightedge iff
• $n$ is a Fermat prime
• $n$ is a power of $2$
• $n$ is a product of a power of $2$ and distinct Fermat primes
sorry my question is really about curves other than the circle. Good theorem though. – quanta Jan 13 '11 at 18:15
To flesh out the comment I gave: Prasolov and Solovyev mention an example due to Euler and Serret: consider the plane curve with complex parametrization
$$z=\frac{(t-a)^{n+2}}{(t-\bar{a})^n (t+i)^2}$$
where $a=\frac{\sqrt{n(n+2)}}{n+1}-\frac{i}{n+1}$ and $n$ is a positive rational number.
The arclength function for this curve is $s=\frac{2\sqrt{n(n+2)}}{n+1}\arctan\,t$; since
$$\arctan\,u+\arctan\,v=\arctan\frac{u+v}{1-u v}$$
the division of an arc of this curve can be done algebraically (with straightedge and compass for special values).
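Setting $u = v$ in the addition formula shows why dividing an arc is algebraic: halving requires $t = \frac{2u}{1-u^2}$, whose positive root $u = \frac{\sqrt{1+t^2}-1}{t}$ involves only square roots. A quick numeric check of both facts:

```python
from math import atan, sqrt, isclose

def atan_add(u, v):
    """RHS of arctan(u) + arctan(v) = arctan((u+v)/(1-uv)), valid for u*v < 1."""
    return (u + v) / (1.0 - u * v)

def halve_arc_param(t):
    """Parameter u with 2*arctan(u) = arctan(t), for t > 0: the positive root
    of t*u**2 + 2*u - t = 0, obtained with square roots only."""
    return (sqrt(1.0 + t * t) - 1.0) / t
```

For example, halving the arc with parameter $t = 1$ gives $u = \sqrt{2}-1$, i.e. $2\arctan(\sqrt{2}-1) = \frac{\pi}{4}$.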
Here are plots of these curves for various values of $n$:
Serret also considered curves whose arclengths can be expressed in terms of the incomplete elliptic integral of the first kind $F(\phi|m)$; I'll write about those later once I figure out how to plot these... (but see the Prasolov/Solovyev book for details)
- | 2016-05-01 04:46:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8129503726959229, "perplexity": 254.8797274995292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860114285.32/warc/CC-MAIN-20160428161514-00027-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/integral-derivative-problem-solving-question.116622/ | # Integral/Derivative Problem Solving Question
tangents
Code:
----------------------------------------------
| Distance x in cm | 0 | 1 | 5 | 6 | 8 |
-------------------|-----|----|----|----|----|
| Temp t(x) in °C | 100 | 93 | 70 | 62 | 55 |
----------------------------------------------
The metal wire is 8 cm long and heated at one end. The distance $x$ is how far from the heated end you are. The function $t$ is decreasing and twice differentiable.
a) Estimate $t'(7)$.
b) Write an integral expression for the average temperature of the wire, and estimate the average temperature using a trapezoidal sum with the 4 subintervals indicated by the data.
c) Find $\int_0^8 t'(x)\,dx$, and explain the meaning of this integral in the context of the problem.
d) Is the data consistent with the assertion that $t''(x)>0$ for every $x$ from 0 to 8? Explain.
Last edited:
## Answers and Replies
Gold Member
Do you have any specific questions regarding the problem?
Are you stuck on one particular question?
tangents
i dont get the whole thing
tangents
bump.........
Jeff Ford
What have you tried so far?
tangents
For part a, I tried to get the slope from 6 to 8, which is the derivative, and plug in 7, but I am not sure if that's correct because it came out to be -24.5
Staff Emeritus
Gold Member
This is interesting because I've just plotted your results and found that they are practically linear
-Hoot
tangents
yeah that's basically what I did to estimate the slope at 7, but -24 just doesn't seem right
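For reference, both quantities under discussion can be computed directly from the table (a quick sketch: the secant slope over [6, 8] comes out to -3.5 °C/cm, and the trapezoidal average to 75.6875 °C):

```python
x = [0, 1, 5, 6, 8]          # distance, cm
t = [100, 93, 70, 62, 55]    # temperature, degrees C

# (a) secant-slope estimate of t'(7) from the two nearest data points
slope_7 = (t[4] - t[3]) / (x[4] - x[3])

# (b) trapezoidal estimate of the average temperature, (1/8) * integral of t over [0, 8]
trap = sum((x[i + 1] - x[i]) * (t[i] + t[i + 1]) / 2 for i in range(4))
avg_temp = trap / (x[-1] - x[0])
```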
Staff Emeritus | 2022-10-05 06:38:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7023637294769287, "perplexity": 1251.8347012729903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00379.warc.gz"} |
https://socratic.org/questions/how-many-electrons-does-copper-have | # How many electrons does copper have?
Oct 12, 2016
If you are doing your chemistry/physics homework, there should be a Periodic Table by your side. The Periodic Table tells me that $Z$, the atomic number for copper metal, is $29$.
What does this mean? It means that there are $29$ protons, 29 massive, positively charged particles in the element's nucleus. This specific number of protons defines the atom as a $\text{copper atom}$.
And thus if there 29 positive charges in the nucleus, there must also be 29 negative charges associated with the neutral atom, and these are supplied by the electron. And so (finally!), the number of electrons is $29$ for the neutral copper atom; 29 electrons whizz about the copper nucleus. | 2019-03-23 01:00:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.448569655418396, "perplexity": 611.5089468437682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202704.58/warc/CC-MAIN-20190323000443-20190323022443-00274.warc.gz"} |
https://mathhelpboards.com/threads/significant-figures-law-of-cosines.5623/ | # [Trigonometry] Significant figures: Law of cosines
#### sweatingbear
##### Member
We have a triangle with sides of length 735 and 420 m, where the included angle between these sides is 50°. The task is simple: Calculate |AB|. Here is a picture:
Upon calculating |AB|, we arrive at approximately 565.48 m. My problem is how many significant figures one ought to have in the answer. General practice is to report as many significant digits as the given datum with the fewest significant digits.
But in this case, how many digits can we view the data 420 and 50 respectively as having? Integers with trailing zeros before the decimal point can at times be quite ambiguous. It would not make any sense to say "9 000 000" has seven significant digits if it is given without context.
So, 420 either has two significant digits or three; 50 either has one significant digit or two. 565.48 with three significant digits is 565, two significant digits 570 and one significant digit 600. My intuition tells me to answer 565, but I am really not sure which cases of significant digits to confidently rule out.
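For concreteness, the three candidate roundings of 565.48 can be reproduced with a small round-to-n-significant-figures helper (a sketch in Python, chosen here purely for illustration):

```python
from math import floor, log10

def to_sig_figs(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    # Exponent of the leading digit; shift so that exactly n digits survive.
    return round(x, -int(floor(log10(abs(x)))) + (n - 1))

print(to_sig_figs(565.48, 3))  # -> 565.0
print(to_sig_figs(565.48, 2))  # -> 570.0
print(to_sig_figs(565.48, 1))  # -> 600.0
```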
#### Klaas van Aarsen
##### MHB Seeker
Staff member
We have a right triangle with sides of length 735 and 420 m where the intermediate angle of aforementioned sides is 50°. The task is simple: Calculate |AB|. Here is a picture:
View attachment 1024
Upon calculating |AB|, we arrive at approximately 565.48 m. My problem is how many significant figures one ought to have in the answer. General practice is to have as many digits as the given data with least amount of significant digits.
But in this case, how many digits can we view the data 420 and 50 respectively to have? Integral values with zeros preceding the decimal point can at times be quite ambiguous. It would not make any sense to say "9 000 000" has seven significant digits since it is given without context.
So, 420 either has two significant digits or three; 50 either has one significant digit or two. 565.48 with three significant digits is 565, two significant digits 570 and one significant digit 600. My intuition tells me to answer 565, but I am really not sure which cases of significant digits to confidently rule out.
I suggest erring on the side of caution.
The least significant is the $50^\circ$, which without any extra information we should interpret as $50.0 \pm 0.5^\circ$.
As a result the answer would be approximately $565 \pm 5\text{ m}$.
To err on the side of caution, I would indeed write this down as $565\text{ m}$, which leaves the actual precision somewhat ambiguous, but at least does not claim less precision than the original measurements.
Note that in actual lab work (where precision is important), the standard error of each measurement is recorded.
The way the errors propagate is analyzed and the final results are reported with a specification of the expected error.
If we assume errors of $\pm 0.5\text{ m}$ respectively $\pm 0.5^\circ$, analysis shows that the final error would be $\pm 3.7\text{ m}$.
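The numbers in this thread can be checked with a short law-of-cosines calculation plus standard first-order (partial-derivative) error propagation. This is a sketch assuming independent errors of ±0.5 m on each side and ±0.5° on the angle, as in the post above:

```python
from math import cos, sin, sqrt, radians

a, b = 735.0, 420.0   # side lengths in metres
C = radians(50.0)     # included angle
da = db = 0.5         # assumed measurement errors: +/- 0.5 m per side
dC = radians(0.5)     # +/- 0.5 degrees, converted to radians

# Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(C)
c = sqrt(a * a + b * b - 2 * a * b * cos(C))

# First-order error propagation: partial derivatives of c,
# combined in quadrature assuming independent errors.
dc_da = (a - b * cos(C)) / c
dc_db = (b - a * cos(C)) / c
dc_dC = a * b * sin(C) / c

dc = sqrt((dc_da * da) ** 2 + (dc_db * db) ** 2 + (dc_dC * dC) ** 2)

print(round(c, 2))   # -> 565.48
print(round(dc, 1))  # -> 3.7
```

Note that almost all of the final uncertainty comes from the angle term, which is why the angle is the least significant datum here.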
#### sweatingbear
##### Member
Frankly, this is merely a simple trigonometry problem, not one where considering measurement uncertainty, specifying error bounds, or similar measures is necessary (although that is debatable, but I digress).
So you would take it that 50 is the datum with the fewest significant digits? All right, I am with you on that, but its number of significant figures is ambiguous. One significant digit is, in my eyes, an exaggerated (and rather erroneous) approximation of the length. It seems two significant digits is the way to go, but my gut still would have wanted me to answer 565 as opposed to 570.
#### Klaas van Aarsen
##### MHB Seeker
Staff member
To properly specify a precision of 2 digits without going into specifying error bounds, you're supposed to write $5.7 \cdot 10^2\text{ m}$. | 2020-09-23 19:09:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8464851975440979, "perplexity": 618.1116347079625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400212039.16/warc/CC-MAIN-20200923175652-20200923205652-00258.warc.gz"} |
https://ftp.aimsciences.org/article/doi/10.3934/cpaa.2013.12.2797 | # American Institute of Mathematical Sciences
November 2013, 12(6): 2797-2809. doi: 10.3934/cpaa.2013.12.2797
## Analytic integrability for some degenerate planar systems
1. Department of Mathematics, University of Huelva, 21071-Huelva. 2. Departament de Matemàtica, Universitat de Lleida, Avda. Jaume II, 69, 25001 Lleida.
Received October 2012. Revised February 2013. Published May 2013.
In the present paper we characterize the analytic integrability around the origin of a family of degenerate differential systems. Moreover, we study the analytic integrability of some degenerate systems through orbital reversibility and through the existence of a Lie symmetry for these systems. The results obtained for this family are similar to those obtained when characterizing the analytic integrability of non-degenerate and nilpotent systems. They can be applied to compute the analytically integrable systems of any particular family of degenerate systems studied.
Citation: Antonio Algaba, Cristóbal García, Jaume Giné. Analytic integrability for some degenerate planar systems. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2797-2809. doi: 10.3934/cpaa.2013.12.2797
2019 Impact Factor: 1.105 | 2021-02-28 22:17:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6212266087532043, "perplexity": 4895.083568541113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00223.warc.gz"} |
https://new.rosettacommons.org/docs/latest/application_documentation/stepwise/stepwise-options | # Inheritance Structure
StepWiseBasicOptions          StepWiseMoveSelectorOptions
   |      |                               |
   |      v                               v
   |      StepWiseMonteCarloOptions
   v
StepWiseBasicModelerOptions   StepWiseProteinModelerOptions   StepWiseRNA_ModelerOptions
   |                _________________|________________________________|
   |               |
   v               v
   StepWiseModelerOptions
*Yes, I know about potential issues with multiple inheritance, but I think they are avoided here, and the alternative solution requires remembering to copy a huge number of options from class to class.
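The multiple-inheritance concern above can be illustrated generically. The sketch below uses Python's method resolution order with made-up class names (it is not Rosetta's actual C++ hierarchy); the point is that in a diamond-shaped hierarchy the shared base appears only once in the resolution order, so an option defined on the common base is not duplicated in the merged class:

```python
class BasicOptions:
    """Plays the role of a shared base holding common options."""
    def __init__(self):
        self.num_random_samples = 20  # a common option, defined once

class ModelerOptions(BasicOptions):
    pass

class MonteCarloOptions(BasicOptions):
    pass

class MergedOptions(ModelerOptions, MonteCarloOptions):
    """Diamond: both parents share BasicOptions, inherited only once."""
    pass

# The shared base occurs exactly once in the method resolution order.
print([c.__name__ for c in MergedOptions.__mro__])
# -> ['MergedOptions', 'ModelerOptions', 'MonteCarloOptions', 'BasicOptions', 'object']
```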
### Note on spawning a StepWiseModelerOptions from StepWiseMonteCarloOptions
Some default values are different for StepWiseModelerOptions when it is created in StepWiseMonteCarlo vs. when it is needed for its original enumeration role in stepwise assembly (SWA). To handle this, StepWiseMonteCarloOptions can generate the appropriate StepWiseModelerOptions through the function setup_modeler_options() -- you've got to be a little careful that these are setup correctly.
There are also some options redundant between StepWiseModelerOptions and StepWiseMonteCarloOptions that might be better grouped into a more basic class -- not too hard to do, just have to be careful about it.
# Current Smorgasbord of Options
The following is just a subset of the options that are technically available to your use; this list has been curated based on what has been well-validated. Some options that mostly played a role with legacy code or extra debugging output have also been omitted.
• -stepwise: options
• -stepwise:fixed_res -- A selection of residues, provided as integers, that must not be allowed to move during minimization
• -stepwise:num_random_samples -- How many random samples should be taken forward to minimization from the StepwiseSampleAndScreen process? Default is 20.
• -stepwise:max_tries_multiplier_for_ccd -- By what factor should stepwise multiply the above option value for moves requiring CCD loop closure (i.e., proteins)? Default is 10.
• -stepwise:atr_rep_screen -- Do we want to screen generated conformations to ensure that distinct partitions (for example, starting residues vs. those being built) have some minimal amount of good attractive interactions but no clashes? Default true.
• -stepwise:atr_rep_screen_for_docking -- The same as atr_rep_screen, but only applies to docking moves
• -stepwise:align_pdb -- A structure that typically contains a subset of the native structure, to which to constrain the modeling problem (using coordinate constraints on each atom, leaving un-penalized any distance up to -rmsd_screen)
• -stepwise:new_align_pdb -- Similar to the above, but the penalty is based on the all-atom RMSD to the -new_align_pdb structure, resulting in a penalty that grows much more gently and naturally. Use with -set_weights alignment 1.0 to turn on the scoring term that enforces this constraint.
• -stepwise:enumerate -- Force enumeration on every move instead of the selection of -stepwise:num_random_samples random samples (default false).
• -stepwise:preminimize -- Only perform the preminimization stage (intended as a quick check; default false)
• -stepwise:skip_preminimize -- Totally skip preminimization (default false) but otherwise proceed through the simulation as normal
• -stepwise:minimize_waters -- In the stepwise mode that explicitly models and hydrates any magnesium ions present, pre-minimizes the waters (default false)
• -stepwise:test_all_moves -- Quickly test all possible moves coming from the starting pose, recursing through additions (default false)
• -stepwise:use_green_packer -- By default stepwise uses 'rotamer trials' to pack sidechains or O2' hydrogens; with this flag true, it will use a packer
• -stepwise:rmsd_screen -- In the presence of -align_pdb, -new_align_pdb, or -native, this option controls the tightness (in Angstroms) of either all-atom coordinate constraints or a direct evaluation of a penalty function on the RMSD.
• -stepwise:skip_minimize -- Skips initial minimization, but still prepacks (default false)
• -stepwise:superimpose_over_all -- Superimposes over all residues (all input plus all built residues); default true
• -stepwise:alignment_anchor_res -- If you pass -superimpose_over_all false, you should supply this option: a residue (as chain:resnum) that defines an input domain over which superposition should happen.
• -stepwise:move -- A single move to execute in Stepwise Monte Carlo. Format is like 'ADD A:5 BOND_TO_PREVIOUS A:4'
• -stepwise:output_minimized_pose_list -- Output all minimized poses after each move. Defaults to true in stepwise assembly legacy code, but false for SWM.
• -stepwise:virtualize_free_moieties_in_native -- Virtualize any groups in the native pose that aren't making any detectable contacts. This omits them from RMSD calculations.
• -stepwise:lores -- Instead of minimizing after every move, do fragment insertion moves with the coarse-grained energy function. Also adds a bunch of base pairs as submotifs to the SubMotifLibrary. Default false.
• -stepwise:definitely_virtualize -- Specified by integer seqpos, particular residues from the native that should be virtualized even if they are making contacts. (This helps sometimes to compare slightly dissimilar stepwise runs.)
• stepwise:monte_carlo: options
• -stepwise:monte_carlo:cycles -- Number of Monte Carlo cycles to conduct (default 50). 'Production' runs should probably use 200-2000 depending on problem difficulty. Very large problems may require 5-10000.
• -stepwise:monte_carlo:temperature -- Temperature of Monte Carlo simulation (default 1.0).
• -stepwise:monte_carlo:skip_deletions -- For testing, skip any delete moves (default false)
• -stepwise:monte_carlo:allow_internal_hinge_moves -- Allow moves where internal residues are sampled freely, causing a hinge-like motion in an entire chain (default true).
• -stepwise:monte_carlo:allow_internal_local_moves -- Allow internal moves where residues are sampled then closed with KIC (default true).
• -stepwise:monte_carlo:allow_skip_bulge -- Allow moves that skip possibly 'bulged residues' instead modeling the subsequent residue as being connected by a jump (default false).
• -stepwise:monte_carlo:skip_bulge_frequency -- The rate at which 'skip bulge' moves are proposed, as a fraction of 'normal' add moves (default 0.0)
• -stepwise:monte_carlo:from_scratch_frequency -- Allows modeling of 'free' dinucleotides, thereby creating a new 'other_pose' (default 0.1).
• -stepwise:monte_carlo:allow_split_off -- Allow the separation of chunks of instantiated RNA into a new 'other_pose' (default true).
• -stepwise:monte_carlo:add_proposal_density_factor -- Increase/decrease the proposal_density_ratio for add moves by this factor (default 1.0).
• -stepwise:monte_carlo:add_delete_frequency -- Controls the relative frequency of add/delete moves versus resample moves (default 0.5).
• -stepwise:monte_carlo:docking_frequency -- The frequency of moves to dock different domains versus sample folding (intramolecular) degrees of freedom (default 0.2)
• -stepwise:monte_carlo:submotif_frequency -- The frequency to add a 'submotif', which is essentially a pre-made ideal segment of RNA whose addition can be detected from sequence alone, e.g., a UA_handle (default 0.2).
• -stepwise:monte_carlo:allow_submotif_split -- Allow submotifs to be split (so, for example, one residue can be deleted with the other remaining). This breaks detailed balance (default false).
• -stepwise:monte_carlo:force_submotif_without_intervening_bulge -- Only add submotifs if both ends can be chain-connected immediately (one attachment; one closed cutpoint); do not permit a bulge to follow one residue (default false).
• -stepwise:monte_carlo:use_first_jump_for_submotif -- Stepwise -lores reads in a bunch of jumps for the SubMotifLibrary; this flag ensures that only the first conformation for every base pair can be selected. Helps get more submotif moves for base pairs that are slightly less common (default false).
• -stepwise:monte_carlo:exclude_submotifs -- Exclude specific submotifs from the list in database/sampling/rna/submotif/submotifs.txt; useful if you want to do a retrospective modeling challenge where you want to use submotifs, but nothing taken from the PDB you're modeling
• -stepwise:monte_carlo:minimize_single_res_frequency -- Frequency to minimize only the added residue rather than all minimization-active residues (default 0.0).
• -stepwise:monte_carlo:allow_variable_bond_geometry -- Allow bond angles and distances to change in 10% of moves (default true, but only available through legacy minimizer).
• -stepwise:monte_carlo:switch_focus_frequency -- Frequency at which we change which input chunk of RNA is being actively modeled (default 0.5)
• -stepwise:monte_carlo:just_min_after_mutation_frequency -- For mutation moves, how frequently should dof sampling be skipped (default 0.5)
• -stepwise:monte_carlo:local_redock_only -- The ResampleMover can change which residues, between two docked chains, are assigned as the jump partners. This flag (default true) ensures that the new residues have to be within 8.0A of the old ones.
• -stepwise:monte_carlo:make_movie -- Output the trial and accepted state for every cycle of Monte Carlo into 'movie' output files (default false).
• -stepwise:monte_carlo:recover_low -- Output the lowest energy model sampled, rather than the last frame (default true).
• -stepwise:monte_carlo:use_precomputed_library -- Makes FROM_SCRATCH moves sample dinucleotide conformations from a library on disk rather than explicitly (default true).
• -stepwise:monte_carlo:vary_loop_length_frequency -- So, if you have a stretch of M 'n's in your fasta file (that is, you're doing design on M residues), in theory maybe you are okay with up to M residues for that loop. -vary_loop_length_frequency allows these loops to shorten (default 0.0).
• -stepwise:monte_carlo:designing_with_noncanonicals -- If 'n' can mean more than just four nucleotides, we need to work through a very different code-path, so this possibility has to be specified (there is a hardcoded possible universe of noncanonicals to work with). This needs work; ideally, we would just use resfile language here.
• -stepwise:monte_carlo:checkpointing_frequency -- Controls how often to output .checkpoint files. The default (every 100 cycles) is probably fine.
• -stepwise:monte_carlo:full_model_constraints -- Constraints that only make sense in the context of the full model pose. These constraints are read in by the StepWiseModeler every cycle and applied if and only if the residue 'already exists'.
• -stepwise:monte_carlo:csa: options (these control the special Conformational Space Annealing job distributor and don't do anything unless it is active)
• -stepwise:monte_carlo:csa:csa_bank_size -- Providing this flag activates the CSA job distributor, and instructs it to keep a 'bank' of this many models (default 0).
• -stepwise:monte_carlo:csa:csa_rmsd -- RMSD cutoff below which two Poses are considered 'the same' (thereby keeping only the lower energy example in the bank) (default 1.0).
• -stepwise:monte_carlo:csa:csa_output_rounds -- Output silent files at intermediate stages (all the integral multiples of the -csa_bank_size) (default false).
• -stepwise:monte_carlo:csa:annealing -- Actually do RMSD annealing, per the original concept of CSA, rather than obeying the fixed csa_rmsd. The original papers suggested using 10 rounds to move from half the average distance between the models that filled the first bank to one-fifth of that distance (default false).
• stepwise:polar_hydrogens: options
• stepwise:polar_hydrogens:vary_polar_hydrogen_geometry -- Optimize the bond geometry of any hydrogens forming hydrogen bonds (default false).
• stepwise:polar_hydrogens:bond_angle_sd_polar_hydrogen -- If the above is true, what should be the constraint minimum for the hydrogen bond angle? (default 60.0).
• stepwise:polar_hydrogens:bond_torsion_sd_polar_hydrogen -- If the above is true, what should be the constraint minimum for the hydrogen bond torsion? (default 30.0).
• stepwise:polar_hydrogens:fix_lengths -- Don't let bond lengths move at all (default false).
• stepwise:polar_hydrogens:fix_angles -- Don't let bond angles move at all (default false).
• stepwise:polar_hydrogens:fix_torsions -- Don't let bond torsions move at all (default false).
• stepwise:polar_hydrogens:disallow_pack_polar_hydrogens -- Don't initially pack polar hydrogens before minimizing (default false).
• stepwise:polar_hydrogens:disallow_vary_geometry_proton_chi -- Omit the 2'-OH from the above considerations (default false), i.e., just do base polar hydrogens.
• stepwise:protein: options
• -stepwise:protein:global_optimize -- Always cluster/pack/minimize over all residues (default false).
• -stepwise:protein:disable_sampling_of_loop_takeoff -- Don't sample psi of the N-terminal residue or phi of the C-terminal residue relative to a loop of moving residues.
• -stepwise:protein:n_sample -- Number of samples on every backbone torsion angle (default 18). Because RESAMPLE moves can affect multiple residues, setting this much higher than 36 becomes explosively slow.
• -stepwise:protein:cart_min -- Use the cartesian minimizer (it's recommended to have -set_weights cart_bonded 1.0 ring_close 0.0 pro_close 0.0 on your command line for your scoring function if you do this)
• -stepwise:protein:use_packer_instead_of_rotamer_trials -- Much as -use_green_packer for RNA, this flag ensures that sidechains are packed using a proper packer algorithm rather than rotamer trials (default false)
• -stepwise:protein:expand_loop_takeoff -- Also sample an additional pair of residues on each side of the loop.
• -stepwise:protein:allow_virtual_side_chains -- On proteins, SWM allows the virtualization of side chains, since these artificial loop problems often lead to highly penalized exposed residues. Bad per-residue scores makes it hard to get residues added with reasonable reference energies. Letting side chains be virtual when they're not making any contacts helps this a lot (default true).
• full_model: options (generally have to do with designation of specific residues from the "full modeling problem" to have special behaviors; specify all residues for these cases as chain:resnum)
• -full_model:global_seq_file -- A fasta-formatted file with the 'global sequence.' Essentially, the full target modeling problem may nonetheless be a subset extracted from a larger RNA structure like a whole ribosome, for speed. But if you want to calculate the secondary structure partition function you want to use the whole, monomeric RNA. This lets you do that.
• -full_model:cutpoint_open -- Residues that, even once the model is finished, will be open cutpoints. For example, all chain endings have this trait by default.
• -full_model:cutpoint_closed -- Residues that, even once the model is finished, will be closed cutpoints. Places where numbering jumps because a loop has been closed with a shorter length than before might have this property.
• -full_model:cyclize -- Pairs of residues: the first is a 3' terminus of one chain, and the second is the 5' terminus of that same chain. They are given closed cutpoint variants and scored by the chainbreak scoring terms.
• -full_model:twoprime -- Pairs of residues: the first is any residue with a free 2' OH, and the second is the 5' terminus of some chain. They are given closed cutpoint variants (well, essentially) and scored by a scoring term analogous to chainbreak.
• -full_model:fiveprime_cap -- Residues that need to have a 5' cap applied with a corresponding 7-methyl guanosine.
• -full_model:jump_res -- Explicit residue specification of good places for jumps (rigid body offsets)
• -full_model:disulfide_res -- Explicit residue specification of where disulfides might need to form during the course of simulation (useful for protein and peptide modeling problems).
• -full_model:extra_min_res -- Residues (other than those that are being built) that should be reminimized every cycle.
• -full_model:extra_min_jump_res -- Jumps (specified by their residue termini) that should be reminimized every cycle.
• -full_model:root_res -- Specify a preferred root for your modeling problem (testing only).
• -full_model:sample_res -- Specify residues that must be sampled. Useful when you are providing a starting structure with residues you would nonetheless like to see deleted and resampled.
• -full_model:calc_rms_res -- The residues over which RMSD should be calculated. Not in wide use outside of SWA; which usually overrides this with its own impression of what's reasonable (depending on the situation, it's "all moving residues" or based on -superimpose_over_all)
• -full_model:working_res -- All residues that are going to be built. By default, this is all input PDBs plus all sample_res (which would include everything listed in the fasta file, too).
• -full_model:motif_mode -- Ensures for fixed residue problems that the closing base pair of every helix is -extra_minimize_res and that stacking is disabled for any terminal residues. Defaults to false, but passing this flag is a good starting point for a 'trial run'; you may then want to refine your own personalized selection of -extra_minimize_res, -terminal_res, and -block_stack_*_res.
• -full_model:allow_jump_in_numbering -- Doesn't assume a cutpoint in cases where residue numbers are nonconsecutive; particularly useful for design scenarios (default false).
• -full_model:rna: options
• -full_model:rna:terminal_res -- Residues that cannot stack during sampling, in either direction
• -full_model:rna:block_stack_above_res -- Residues to which special 'repulsive-only' atoms are added to prevent stacking 'above' the base. The 3'-most residue of a helix that does not make a coaxial stack could have this variant.
• -full_model:rna:block_stack_below_res -- Residues to which special 'repulsive-only' atoms are added to prevent stacking 'below' the base. The 5'-most residue of a helix that does not make a coaxial stack could have this variant.
• -full_model:rna:force_syn_chi_res_list -- Residues whose chi1 (the glycosidic torsion) must be 'syn'. Anti samples are just omitted by the sampler.
• -full_model:rna:force_anti_chi_res_list -- Residues whose chi1 (the glycosidic torsion) must be 'anti'. Syn samples are just omitted by the sampler.
• -full_model:rna:force_north_sugar_list -- Residues whose sugar pucker is forced to be 'north'.
• -full_model:rna:force_south_sugar_list -- Residues whose sugar pucker is forced to be 'south'.
• -full_model:rna:bulge_res -- Residues that should be made into a 'bulge variant' rather than built explicitly.
• -full_model:rna:sample_sugar_res -- Residues that, despite having been provided as a fixed chunk of RNA, have sugars that should be resampled.
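As a worked example of how these options combine, here is a hypothetical flags file for a stepwise monte carlo run. Only options documented above are used; the input flags (-fasta, -s, -native) are standard Rosetta I/O flags assumed for completeness, and all file names and residue selections are placeholders:

```
# Hypothetical @flags file for a stepwise monte carlo run.
# File names and residue selections below are placeholders.
-fasta target.fasta
-s start_helix.pdb
-native native.pdb
-stepwise:monte_carlo:cycles 2000              # 'production' range suggested above
-stepwise:num_random_samples 20
-stepwise:rmsd_screen 4.0                      # requires -native or -align_pdb
-stepwise:monte_carlo:allow_internal_local_moves true
-full_model:motif_mode                         # good starting point for a trial run
-full_model:sample_res A:5 A:6 A:7 A:8         # residues that must be sampled
```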
Go back to StepWise Overview. | 2022-01-16 22:00:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6285693049430847, "perplexity": 7441.785286801523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00163.warc.gz"} |
http://orbi.ulg.ac.be/browse?type=author&value=Absil,%20Olivier%20p003348 | References of "Absil, Olivier" in Complete repository. Showing results 1 to 20 of 265.
On-sky performance of the QACITS pointing control technique with the Keck/NIRC2 vortex coronagraph. Huby, Elsa; Bottom, Michael; Femenia, Bruno et al. in Astronomy and Astrophysics (in press)
A vortex coronagraph is now available for high contrast observations with the Keck/NIRC2 instrument at L band.
Reaching the optimal performance of the coronagraph requires fine control of the wavefront incident on the phase mask. In particular, centering errors can lead to significant stellar light leakage that degrades the contrast performance and prevents the observation of faint planetary companions around the observed stars. It is thus critical to correct for the possible slow drift of the star image from the phase mask center, generally due to mechanical flexures induced by temperature and/or gravity field variation, or to misalignment between the optics that rotate in pupil tracking mode. A control loop based on the QACITS algorithm for the vortex coronagraph has thus been developed and deployed for the Keck/NIRC2 instrument. This algorithm executes the entire observing sequence, including the calibration steps, initial centering of the star on the vortex center and stabilisation during the acquisition of science frames. On-sky data show that the QACITS control loop stabilizes the position of the star image down to 2.4 mas rms at a frequency of about 0.02 Hz. However, the accuracy of the estimator is probably limited by a systematic error due to a misalignment of the Lyot stop with respect to the entrance pupil, estimated to be on the order of 4.5 mas. A method to reduce the amplitude of this bias down to 1 mas is proposed. The QACITS control loop has been successfully implemented and provides a robust method to center and stabilize the star image on the vortex mask. In addition, QACITS ensures a repeatable pointing quality and significantly improves the observing efficiency compared to manual operations. It is now routinely used for vortex coronagraph observations at Keck/NIRC2, providing contrast and angular resolution capabilities suited for exoplanet and disk imaging. 
Resolved astrometric orbits of ten O-type binaries
Le Bouquin, J.-B.; Sana, H.; Gosset, Eric et al., in Astronomy and Astrophysics (in press)
Our long-term aim is to derive model-independent stellar masses and distances for long-period massive binaries by combining the apparent astrometric orbit with double-lined radial velocity amplitudes (SB2). We follow up ten O+O binaries with AMBER, PIONIER and GRAVITY at the VLTI. Here, we report about 130 astrometric observations over the last 7 years. We combine this dataset with distance estimates to compute the total mass of the systems. We also compute preliminary individual component masses for the five systems with available SB2 radial velocities. Nine of the ten binaries have their three-dimensional orbits well constrained. Four of them are known colliding-wind, non-thermal radio emitters, and thus constitute valuable targets for future high angular resolution radio imaging. Two binaries break the correlation between period and eccentricity tentatively observed in previous studies, suggesting either that massive star formation produces a wide range of systems, or that several binary formation mechanisms are at play. Finally, we found that the use of existing SB2 radial velocity amplitudes can lead to unrealistic masses and distances. If not understood, the biases in radial velocity amplitudes will represent an intrinsic limitation for estimating dynamical masses from SB2+interferometry or SB2+Gaia. Nevertheless, our results can be combined with future Gaia astrometry to measure the dynamical masses and distances of the individual components with an accuracy of 5 to 15%, completely independently of the radial velocities.
Structure of Herbig AeBe disks at the milliarcsecond scale. A statistical survey in the H band using PIONIER-VLTI
Lazareff, B.; Berger, J.-P.; Kluska, J. et al., in Astronomy and Astrophysics (in press)
Context. It is now generally accepted that the near-infrared excess of Herbig AeBe stars originates in the dust of a circumstellar disk. Aims. The aims of this article are to infer the radial and vertical structure of these disks at scales of order one au, and the properties of the dust grains. Methods. The program objects (51 in total) were observed with the H-band (1.6 micron) PIONIER/VLTI interferometer. The largest baselines allowed us to resolve (at least partially) structures of a few tenths of an au at typical distances of a few hundred parsecs. Dedicated UBVRIJHK photometric measurements were also obtained. Spectral and 2D geometrical parameters are extracted via fits of a few simple models: ellipsoids and broadened rings with azimuthal modulation. Model bias is mitigated by parallel fits of physical disk models. Sample statistics were evaluated against similar statistics for the physical disk models to infer properties of the sample objects as a group. Results. We find that dust at the inner rim of the disk has a sublimation temperature Tsub ~ 1800 K. A ring morphology is confirmed for approximately half the resolved objects; these rings are wide, delta_r >= 0.5. A wide ring favors a rim that, on the star-facing side, looks more like a knife edge than a doughnut. The data are also compatible with the combination of a narrow ring and an inner disk of unspecified nature inside the dust sublimation radius. The disk inner part has a thickness z/r ~ 0.2, flaring to z/r ~ 0.5 in the outer part.
We confirm the known luminosity-radius relation; a simple physical model is consistent with both the mean luminosity-radius relation and the ring relative width; however, a significant spread around the mean relation is present. In some of the objects we find a halo component, fully resolved at the shortest interferometer spacing, that is related to the HAeBe class.

Characterizing exoplanetary atmospheres with a mid-infrared nulling spectrograph
Defrere, Denis; Léger, Alain; Absil, Olivier et al., poster (2017, March 07)
The discovery of an increasing number of terrestrial planets around nearby stars marks the dawn of a new era in the exoplanet field: the characterization and understanding of their atmospheres. To make significant progress, it is becoming clear that a large number of exoplanetary atmospheres have to be studied at various wavelengths. This is particularly relevant for identifying possible bio-signatures. In this poster, we present a concept of a space-based mid-infrared nulling spectrograph that can characterize a large number of exoplanetary atmospheres and provide key information on their size, surface temperature, and the presence of key molecules such as CO2, H2O, CH4 and O3. The proposed mission concept would be particularly suited to characterize Proxima Cen b.

VLT/SPHERE robust astrometry of the HR8799 planets at milliarcsecond-level accuracy. Orbital architecture analysis with PyAstrOFit
Wertz, Olivier; Absil, Olivier; Gómez González, Carlos et al., in Astronomy and Astrophysics (2017), 598
HR8799 is orbited by at least four giant planets, making it a prime target for the recently commissioned Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (VLT/SPHERE). As such, it was observed on five consecutive nights during the SPHERE science verification in December 2014. We aim to take full advantage of the SPHERE capabilities to derive accurate astrometric measurements based on H-band images acquired with the Infra-Red Dual-band Imaging and Spectroscopy (IRDIS) subsystem, and to explore the ultimate astrometric performance of SPHERE in this observing mode. We also aim to present a detailed analysis of the orbital parameters for the four planets. We report the astrometric positions for epoch 2014.93 with an accuracy down to 2.0 mas, mainly limited by the astrometric calibration of IRDIS. For each planet, we derive the posterior probability density functions for the six Keplerian elements and identify sets of highly probable orbits. For planet d, there is clear evidence for nonzero eccentricity ($e \simeq 0.35$), without completely excluding solutions with smaller eccentricities. The three other planets are consistent with circular orbits, although their probability distributions spread beyond $e = 0.2$, and show a peak at $e \simeq 0.1$ for planet e. The four planets have consistent inclinations of about $30\deg$ with respect to the sky plane, but the confidence intervals for the longitude of ascending node are disjoint for planets b and c, and we find tentative evidence for non-coplanarity between planets b and c at the $2\sigma$ level.
First scattered-light images of the gas-rich debris disk around 49 Ceti
Choquet, É.; Milli, J.; Wahhaj, Z. et al., in Astrophysical Journal Letters (2017), 834(2), 12
We present the first scattered-light images of the debris disk around 49 Ceti, a ~40 Myr A1 main-sequence star at 59 pc, famous for hosting two massive dust belts as well as large quantities of atomic and molecular gas. The outer disk is revealed in reprocessed archival Hubble Space Telescope NICMOS F110W images, as well as new coronagraphic H-band images from the Very Large Telescope SPHERE instrument. The disk extends from 1.1" (65 AU) to 4.6" (250 AU), and is seen at an inclination of 73 degrees, which refines previous measurements at lower angular resolution. We also report no companion detection larger than 3 M_Jup at projected separations beyond 20 AU from the star (0.34"). Comparison between the F110W and H-band images is consistent with a grey color of 49 Ceti's dust, indicating grains larger than 2 microns. Our photometric measurements indicate a scattering efficiency / infrared excess ratio of 0.2-0.4, relatively low compared to other characterized debris disks. We find that 49 Ceti presents morphological and scattering properties very similar to those of the gas-rich HD 131835 system. From our constraint on the disk inclination we find that the atomic gas previously detected in absorption must extend to the inner disk, and that the latter must be depleted of CO gas. Building on previous studies, we propose a schematic view of the system describing the dust and gas structure around 49 Ceti and hypothetical scenarios for the gas nature and origin.

The W. M. Keck Observatory infrared vortex coronagraph and a first image of HIP79124 B
Serabyn, Eugene; Huby, Elsa; Matthews, Keith et al., in Astronomical Journal (2017), 153(1), 43
An optical vortex coronagraph has been implemented within the NIRC2 camera on the Keck II telescope and used to carry out on-sky tests and observations. The development of this new L'-band observational mode is described, and an initial demonstration of the new capability is presented: a resolved image of the low-mass companion to HIP79124, which had previously been detected by means of interferometry. With HIP79124 B at a projected separation of 186.5 mas, both the small inner working angle of the vortex coronagraph and the related imaging improvements were crucial in imaging this close companion directly. Due to higher Strehl ratios and more relaxed contrasts in L' band versus H band, this new coronagraphic capability will enable high-contrast small-angle observations of nearby young exoplanets and disks on a par with those of shorter-wavelength extreme adaptive optics coronagraphs.

Characterization of the inner disk around HD 141569 A from Keck/NIRC2 L-band vortex coronagraphy
Mawet, Dimitri; Choquet, Élodie; Absil, Olivier et al., in Astronomical Journal (2017), 153(1), 44
HD 141569 A is a pre-main sequence B9.5 Ve star surrounded by a prominent and complex circumstellar disk, likely still in a transition stage from protoplanetary to debris disk phase.
Here, we present a new image of the third inner disk component of HD 141569 A made in the L' band (3.8 micron) during the commissioning of the vector vortex coronagraph recently installed in the near-infrared imager and spectrograph NIRC2 behind the W. M. Keck Observatory Keck II adaptive optics system. We used reference point spread function subtraction, which reveals the innermost disk component from the inner working distance of $\simeq 23$ AU up to $\simeq 70$ AU. The spatial scale of our detection roughly corresponds to the optical and near-infrared scattered light, thermal Q, N and 8.6 micron PAH emission reported earlier. We also see an outward progression in dust location from the L' band to the H band (VLT/SPHERE image) to the visible (HST/STIS image), likely indicative of dust blowout. The warm disk component is nested deep inside the two outer belts imaged by HST NICMOS in 1999 (at 406 and 245 AU, respectively). We fit our new L'-band image and the spectral energy distribution of HD 141569 A with the radiative transfer code MCFOST. Our best-fit models favor pure olivine grains, and are consistent with the composition of the outer belts. While our image shows a putative very faint point-like clump or source embedded in the inner disk, we did not detect any true companion within the gap between the inner disk and the first outer ring, at a sensitivity of a few Jupiter masses.

Discovery of a low-mass companion inside the debris ring surrounding the F5V star HD 206893
Milli, J.; Hibon, P.; Christiaens, Valentin et al., in Astronomy and Astrophysics (2017), 597
Aims: Uncovering the ingredients and the architecture of planetary systems is a very active field of research that has fuelled many new theories on giant planet formation, migration, composition, and interaction with the circumstellar environment. We aim to discover and study such new systems, to further expand our knowledge of how low-mass companions form and evolve.
Methods: We obtained high-contrast H-band images of the circumstellar environment of the F5V star HD 206893, known to host a debris disc never detected in scattered light. These observations are part of the SPHERE High Angular Resolution Debris Disc Survey (SHARDDS) using the InfraRed Dual-band Imager and Spectrograph (IRDIS) installed on VLT/SPHERE.
Results: We report the detection of a source with a contrast of 3.6 × 10^-5 in the H band, orbiting at a projected separation of 270 milliarcsec or 10 au, corresponding to a mass in the range 24 to 73 M_Jup for an age of the system in the range 0.2 to 2 Gyr. The detection was confirmed ten months later with VLT/NaCo, ruling out a background object with no proper motion. A faint extended emission compatible with the disc scattered-light signal is also observed.
Conclusions: The detection of a low-mass companion inside a massive debris disc makes this system an analog of other young planetary systems such as β Pictoris, HR 8799 or HD 95086, and now requires further characterisation of both components to understand their interactions.

The SHARDDS survey: First resolved image of the HD 114082 debris disk in the Lower Centaurus Crux with SPHERE
Wahhaj, Zahed; Milli, Julien; Kennedy, Grant et al., in Astronomy and Astrophysics (2016), 596
We present the first resolved image of the debris disk around the 16 ± 8 Myr old star HD 114082. The observation was made in the H band using the SPHERE instrument. The star is at a distance of 92 ± 6 pc in the Lower Centaurus Crux association. Using a Markov chain Monte Carlo analysis, we determined that the debris is likely in the form of a dust ring with an inner edge of 27.7 (+2.8/−3.5) au, position angle −74.3° (+0.5/−1.5), and an inclination with respect to the line of sight of 6.7° (+3.8/−0.4). The disk imaged in scattered light has a surface density that declines with radius as r^-4, which is steeper than expected for grain blowout by radiation pressure. We find only marginal evidence (2σ) of eccentricity, and rule out planets more massive than 1.0 M_Jup orbiting within 1 au of the inner edge of the ring, since such a planet would have disrupted the disk.
The disk has roughly the same fractional disk luminosity (L_disk/L_* = 3.3 × 10^-3) as HR 4796 A and β Pictoris; however, it was not detected by previous instrument facilities, most likely because of its small angular size (radius 0.4"), low albedo (~0.2), and low scattering efficiency far from the star due to high scattering anisotropy. With the arrival of extreme adaptive optics systems such as SPHERE and GPI, the morphology of smaller, fainter, and more distant debris disks is being revealed, providing clues to planet-disk interactions in young protoplanetary systems. The reduced images are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/L4

Optimizing the subwavelength grating of L-band annular groove phase masks for high coronagraphic performance
Vargas Catalán, E.; Huby, Elsa; Forsberg, P. et al., in Astronomy and Astrophysics (2016), 595
Context. The annular groove phase mask (AGPM) is one possible implementation of the vector vortex coronagraph, where the helical phase ramp is produced by a concentric subwavelength grating. For several years, we have been manufacturing AGPMs by etching gratings into synthetic diamond substrates using inductively coupled plasma etching.
Aims: We aim to design, fabricate, optimize, and evaluate new L-band AGPMs that reach the highest possible coronagraphic performance, for applications in current and forthcoming infrared high-contrast imagers.
Methods: Rigorous coupled wave analysis (RCWA) is used for designing the subwavelength grating of the phase mask. Coronagraphic performance evaluation is performed on a dedicated optical test bench. The experimental results of the performance evaluation are then used to accurately determine the actual profile of the fabricated gratings, based on RCWA modeling.
Results: The AGPM coronagraphic performance is very sensitive to small errors in etch depth and grating profile. Most of the fabricated components therefore show moderate performance in terms of starlight rejection (a few 100:1 in the best cases). Here we present new processes for re-etching the fabricated components in order to optimize the parameters of the grating and hence significantly increase their coronagraphic performance. Starlight rejection up to 1000:1 is demonstrated in a broadband L filter on the coronagraphic test bench, which corresponds to a raw contrast of about 10^-5 at two resolution elements from the star for a perfect input wave front on a circular, unobstructed aperture.
Conclusions: Thanks to their exquisite performance, our latest L-band AGPMs are good candidates for installation in state-of-the-art and future high-contrast thermal infrared imagers, such as METIS for the E-ELT.

A near-infrared interferometric survey of debris-disc stars. V. PIONIER search for variability
Ertel, S.; Defrere, Denis; Absil, Olivier et al., in Astronomy and Astrophysics (2016), 595
Context. Extended circumstellar emission has been detected within a few 100 milliarcsec around ≳10% of nearby main-sequence stars using near-infrared interferometry. Follow-up observations using other techniques, should they yield similar results or non-detections, can provide strong constraints on the origin of the emission. They can also reveal the variability of the phenomenon. Aims: We aim to demonstrate the persistence of the phenomenon over the timescale of a few years and to search for variability of our previously detected excesses. Methods: Using the Very Large Telescope Interferometer (VLTI)/Precision Integrated Optics Near Infrared ExpeRiment (PIONIER) in the H band, we have carried out multi-epoch observations of the stars for which a near-infrared excess was previously detected using the same observation technique and instrument. The detection rates and distribution of the excesses from our original survey and the follow-up observations are compared statistically. A search for variability of the excesses in our time series is carried out based on the level of the broadband excesses. Results: In 12 of 16 follow-up observations, an excess is re-detected with a significance of >2σ, and in 7 of 16 follow-up observations a significant excess (>3σ) is re-detected.
We statistically demonstrate with very high confidence that the phenomenon persists for the majority of the systems. We also present the first detection of potential variability in two sources. Conclusions: We conclude that the phenomenon responsible for the excesses persists over the timescale of a few years for the majority of the systems. However, we also find that variability intrinsic to a target can cause it to have no significant excess at the time of a specific observation.

Exocomet signatures around the A-shell star φ Leonis?
Eiroa, C.; Rebollido, I.; Montesinos, B. et al., in Astronomy and Astrophysics (2016), 594
We present an intensive monitoring of high-resolution spectra of the Ca II K line in the A7IV shell star φ Leo at very short (minutes, hours), short (night to night), and medium (weeks, months) timescales. The spectra show remarkable variable absorptions on timescales of hours, days, and months. The characteristics of these sporadic events are very similar to most of those observed toward the debris disk host star β Pic, which are commonly interpreted as signs of the evaporation of solid, comet-like bodies grazing or falling onto the star. Therefore, our results suggest the presence of solid bodies around φ Leo. To our knowledge, with the exception of β Pic, our monitoring has the best time resolution at the mentioned timescales for a star with events attributed to exocomets. Assuming the cometary scenario and considering the timescales of our monitoring, our results indicate that φ Leo presents the richest environment with comet-like events known to date, second only to β Pic.
Preliminary optical design for the common fore optics of METIS
Agócs, Tibor; Brandl, Bernhard R.; Jager, Rieks et al., in Evans, C.; Simard, L.; Takami, H. (Eds.), Ground-based and Airborne Instrumentation for Astronomy VI (2016, August 09)
METIS is the Mid-infrared E-ELT Imager and Spectrograph, which will provide outstanding observing capabilities, focusing on high angular and spectral resolution. It consists of two diffraction-limited imagers operating in the LM and NQ bands respectively, and an IFU-fed diffraction-limited high-resolution (R = 100,000) LM-band spectrograph. These science subsystems are preceded by the common fore optics (CFO), which provides the following essential functionalities: calibration, chopping, image de-rotation, thermal background and stray light reduction. We show the evolution of the CFO optical design from the conceptual design to the preliminary optical design, detail the optimization steps and discuss the necessary trade-offs.

High-contrast imaging with METIS
Kenworthy, Matthew A.; Absil, Olivier; Agócs, Tibor et al., in Evans, C.; Simard, L.; Takami, H. (Eds.), Ground-based and Airborne Instrumentation for Astronomy VI (2016, August 09)
The Mid-infrared E-ELT Imager and Spectrograph (METIS) for the European Extremely Large Telescope (E-ELT) consists of diffraction-limited imagers that cover 3 to 14 microns with medium-resolution (R ~ 5000) long-slit spectroscopy, and an integral field spectrograph for high spectral resolution spectroscopy (R ~ 100,000) over the L and M bands. One of the science cases that METIS addresses is the characterization of faint circumstellar material and exoplanet companions through imaging and spectroscopy. We present our approach for high contrast imaging with METIS, covering diffraction suppression with coronagraphs, the removal of slowly changing optical aberrations with focal plane wavefront sensing, interferometric imaging with sparse aperture masks, and observing strategies for both the imagers and IFU image slicers.

Making high-accuracy null depth measurements for the LBTI exozodi survey
Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias et al., in Malbet, F.; Creech-Eakman, M.; Tuthill, P. (Eds.), Optical and Infrared Interferometry and Imaging V (2016, August 04)
The characterization of exozodiacal light emission is important both for the understanding of planetary system evolution and for the preparation of future space missions aiming to characterize low-mass planets in the habitable zone of nearby main-sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star, for a representative ensemble of 30+ high-priority targets.
Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the target sample. Reaching a 1σ measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation ("null") levels, i.e. visibilities, at the few-hundred-ppm uncertainty level. We discuss here the challenges posed by making such high-accuracy mid-infrared visibility measurements from the ground, and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.

The path to interferometry in space
Rinehart, S. A.; Savini, G.; Holland, W. et al., in Malbet, F.; Creech-Eakman, M.; Tuthill, P. (Eds.), Optical and Infrared Interferometry and Imaging V (2016, August 04)
For over two decades, astronomers have considered the possibilities for interferometry in space. The first of these missions was the Space Interferometry Mission (SIM), but it was followed by missions for studying exoplanets (e.g. Terrestrial Planet Finder, Darwin), and then far-infrared interferometers (e.g. the Space Infrared Interferometric Telescope, the Far-Infrared Interferometer). Unfortunately, following the cancellation of SIM, the future of space-based interferometry has been in doubt, and the interferometric community needs to reevaluate the path forward. While interferometers have strong potential for scientific discovery, there are technological developments still needed, and continued maturation of techniques is important for advocacy to the broader astronomical community.
We review the status of several concepts for space-based interferometry, and look for possible synergies between missions oriented towards different science goals. [less ▲]Detailed reference viewed: 12 (2 ULg) End-to-end simulations of the E-ELT/METIS coronagraphsCarlomagno, Brunella ; Absil, Olivier ; Kenworthy, Matthew et alin Marchetti, E.; Close, L.; Véran, J.-P. (Eds.) Adaptive Optics Systems V (2016, July 27)The direct detection of low-mass planets in the habitable zone of nearby stars is an important science case for future E-ELT instruments such as the mid-infrared imager and spectrograph METIS, which ... [more ▼]The direct detection of low-mass planets in the habitable zone of nearby stars is an important science case for future E-ELT instruments such as the mid-infrared imager and spectrograph METIS, which features vortex phase masks and apodizing phase plates (APP) in its baseline design. In this work, we present end-to-end performance simulations, using Fourier propagation, of several METIS coronagraphic modes, including focal-plane vortex phase masks and pupil-plane apodizing phase plates, for the centrally obscured, segmented E-ELT pupil. The atmosphere and the AO contributions are taken into account. Hybrid coronagraphs combining the advantages of vortex phase masks and APPs are considered to improve the METIS coronagraphic performance. [less ▲]Detailed reference viewed: 18 (3 ULg) Commissioning and first light results of an L'-band vortex coronagraph with the Keck II adaptive optics NIRC2 science instrumentFemenía Castellá, Bruno; Serabyn, Eugene; Mawet, Dimitri et alin Marchetti, E.; Close, L.; Véran, J.-P. (Eds.) Adaptive Optics Systems V (2016, July 26)On March 2015 an L'-band vortex coronagraph based on an Annular Groove Phase Mask made up of a diamond sub-wavelength grating was installed on NIRC2 as a demonstration project. This vortex coronagraph ... 
[more ▼]On March 2015 an L'-band vortex coronagraph based on an Annular Groove Phase Mask made up of a diamond sub-wavelength grating was installed on NIRC2 as a demonstration project. This vortex coronagraph operates in the L' band not only in order to take advantage from the favorable star/planet contrast ratio when observing beyond the K band, but also to exploit the fact that the Keck II Adaptive Optics (AO) system delivers nearly extreme adaptive optics image quality (Strehl ratios values near 90%) at 3.7μm. We describe the hardware installation of the vortex phase mask during a routine NIRC2 service mission. The success of the project depends on extensive software development which has allowed the achievement of exquisite real-time pointing control as well as further contrast improvements by using speckle nulling to mitigate the effect of static speckles. First light of the new coronagraphic mode was on June 2015 with already very good initial results. Subsequent commissioning nights were interlaced with science nights by members of the VORTEX team with their respective scientific programs. The new capability and excellent results so far have motivated the VORTEX team and the Keck Science Steering Committee (KSSC) to offer the new mode in shared risk mode for 2016B. [less ▲]Detailed reference viewed: 21 (2 ULg) The QACITS pointing sensor: from theory to on-sky operation on Keck/NIRC2Huby, Elsa ; Absil, Olivier ; Mawet, Dimitri et alin Marchetti, E.; Close, L.; Véran, J.-P. (Eds.) Adaptive Optics Systems V (2016, July 26)Small inner working angle coronagraphs are essential to benefit from the full potential of large and future extremely large ground-based telescopes, especially in the context of the detection and ... [more ▼]Small inner working angle coronagraphs are essential to benefit from the full potential of large and future extremely large ground-based telescopes, especially in the context of the detection and characterization of exoplanets. 
Among existing solutions, the vortex coronagraph stands as one of the most effective and promising solutions. However, for focal-plane coronagraph, a small inner working angle comes necessarily at the cost of a high sensitivity to pointing errors. This is the reason why a pointing control system is imperative to stabilize the star on the vortex center against pointing drifts due to mechanical flexures, that generally occur during observation due for instance to temperature and/or gravity variations. We have therefore developed a technique called QACITS[SUP]1[/SUP] (Quadrant Analysis of Coronagraphic Images for Tip-tilt Sensing), which is based on the analysis of the coronagraphic image shape to infer the amount of pointing error. It has been shown that the flux gradient in the image is directly related to the amount of tip-tilt affecting the beam. The main advantage of this technique is that it does not require any additional setup and can thus be easily implemented on all current facilities equipped with a vortex phase mask. In this paper, we focus on the implementation of the QACITS sensor at Keck/NIRC2, where an L-band AGPM has been recently commissioned (June and October 2015), successfully validating the QACITS estimator in the case of a centrally obstructed pupil. The algorithm has been designed to be easily handled by any user observing in vortex mode, which is available for science in shared risk mode since 2016B. 
[less ▲]Detailed reference viewed: 3 (1 ULg) | 2017-03-23 08:31:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6270627975463867, "perplexity": 5239.588173613152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186841.66/warc/CC-MAIN-20170322212946-00607-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://www.thestudentroom.co.uk/showthread.php?t=4344152 |
# why does incorrect solution come from step 1
1. Squaring both sides of an equation like this can result in incorrect solutions.
Consider the equation , if you square both sides you get and this is obviously not true.
2. (Original post by KloppOClock)
Squaring both sides of an equation can generate extra solutions. The reason is that the reverse process can have multiple solutions, i.e.
In this example, satisfies
But if you square root both sides, only satisfies the equation if you take the negative square root of one of the sides:
3. (Original post by KloppOClock)
...
The fundamental problem is that you are applying some function to both sides of the original equation, to create a new equation. You require that the new equation and the original equation are equivalent i.e. that they have the same set of solutions.
To express this logically, you require the following:
Now the forward implication is fine, since that is true as part of the definition of a function (they have to be single-valued), but the reverse implication says that must be one-one, and some functions aren't, e.g.:
and so on. So you must only transform your equation using functions that are both one-one and onto (since the function must be defined for all possible values of both sides of the original), and that means that must be invertible i.e. must exist.
1. The first problem is that if you don't use an invertible function you can create extraneous solutions. Logically, you have something like this, for :
so
so if we start with the trivial equation and apply to both sides, we must write:
which has produced the extraneous solution .
We can fix this up, however, by carrying across information from the original equation into the transformed equation. For example, note that has range . We can then do, say, the following:
2. Here we need
3. Add the currently unstated range restriction:
4. Solve the equation by squaring:
where, in the final step, the logical and of and removes the extraneous solution. Note that I can use now that I've added in the range restriction, so we can see that the final equation and original equation are completely equivalent. (Note also that although there were several transformations applied to various equations, only the first is due to a non-invertible function, so we don't have to worry about the rest.)
A common problem is when we clear the denominator of a fractional expression, since is only invertible if (since e.g. ).
So e.g. we may have to do, say, the following:
so again the original and final equations have the same set of solutions, once I've added the required restriction on the denominator as a logical condition, which ensures invertibility of all operations.
2. We can also have the problem of losing solutions when applying a function to an equation e.g.
We lost the solution by dividing away the . What's the problem here?
Well, the function that we applied to both sides here is , and that domain of that function does not include . So by dividing by , we are implicitly saying:
i.e. to allow us to divide away , we immediately have to remove one of the values that happens to be a solution to the original equation.
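The mechanics described above can be checked numerically. The sketch below uses a made-up equation, sqrt(x) = x - 2 (not one from the thread itself): squaring yields two candidates, and substituting each back into the original equation filters out the extraneous root, which is exactly the "carry across information from the original equation" step.

```python
import math

# Solve sqrt(x) = x - 2 by squaring both sides:
# x = (x - 2)^2  =>  x^2 - 5x + 4 = 0  =>  candidates x = 1 and x = 4.
a, b, c = 1, -5, 4
disc = math.sqrt(b * b - 4 * a * c)
candidates = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]

# Squaring is not invertible, so each candidate must be checked
# against the ORIGINAL equation to discard extraneous roots.
solutions = [x for x in candidates
             if x >= 0 and math.isclose(math.sqrt(x), x - 2)]
print(candidates)   # [1.0, 4.0] -- x = 1 is extraneous: sqrt(1) = 1, not -1
print(solutions)    # [4.0]
```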
Updated: September 30, 2016
Top tips from students who have already aced their exams | 2017-05-27 08:14:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8063836097717285, "perplexity": 506.7230833394987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608877.60/warc/CC-MAIN-20170527075209-20170527095209-00433.warc.gz"} |
https://bookdown.org/steve_midway/BHME/Ch3.html | # Chapter 4 Bayesian Machinery
## 4.1 Bayes’ Rule
$P(\theta|y)=\frac{P(y|\theta)\times P(\theta)}{P(y)}$ where
1. $$P(\theta|y)$$ = posterior distribution
2. $$P(y|\theta)$$ = Likelihood function
3. $$P(\theta)$$ = Prior distribution
4. $$P(y)$$ = Normalizing constant
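To make the four pieces concrete, here is a toy grid approximation of Bayes' Rule for a coin's heads-probability theta, using invented data (6 heads in 9 flips) and a flat prior; the sum over the grid plays the role of the normalizing constant:

```python
# Grid approximation of Bayes' rule for a coin's heads-probability theta,
# after observing y = 6 heads in n = 9 flips (toy numbers, our choice).
thetas = [i / 100 for i in range(1, 100)]            # grid over (0, 1)
prior = [1.0 for _ in thetas]                        # flat prior P(theta)
like = [t**6 * (1 - t)**3 for t in thetas]           # likelihood P(y|theta)
unnorm = [p * l for p, l in zip(prior, like)]
norm = sum(unnorm)                                   # stands in for P(y)
posterior = [u / norm for u in unnorm]               # P(theta|y), sums to 1
print(round(sum(posterior), 6))                      # 1.0
best = thetas[posterior.index(max(posterior))]
print(best)                                          # 0.67, near 6/9
```

With a flat prior the posterior mode matches the maximum-likelihood estimate, which is the "same answer as the ML estimator" point made below.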
### 4.1.1 Posterior Distribution: $$p(\theta | y)$$
The posterior distribution (often abbreviated as the posterior) is simply the result of computing Bayes’ Rule for a set of data and parameters. Because we don’t get point estimates for answers, we correctly call it a distribution, and we add the term posterior because this is the distribution produced at the end. You can think of the posterior as a statement about the probability of the parameter value given the data you observed.
“Reallocation of credibilities across possibilities.” - John Kruschke
### 4.1.2 Likelihood Function: $$p(y | \theta)$$
• Skip the math
• Consider it similar to other likelihood functions
• In fact, it will give you the same answer as the ML estimator (interpretation differs)
## 4.2 Priors: $$p(\theta)$$
• Distribution we give to a parameter before computation
• WARNING: This is historically a big deal among statisticians, and subjectivity is a main concern cited by Frequentists
• Priors can have very little, if any, influence (e.g., diffuse, vague, non-informative, unspecified, etc), yet all priors are technically informative.
• Much of ecology uses diffuse priors, so little concern
• But priors can be practical if you really do know information (e.g., even basic information, like populations can’t be negative values)
• Simple models may not need informative priors; complex models may need priors
You may not need informative priors when starting to model. Regardless, always think about your priors, explore how they work, and be prepared to defend them to reviewers and other peers.
“So far there are only few articles in ecological journals that have actually used this asset of Bayesian statistics.” - Marc Kery (2010)
## 4.3 Normalizing Constant: $$P(y)$$
The normalizing constant scales the posterior so that the area under the curve is 1. While this may seem technical—and it is—this is what allows us to interpret Bayesian output probabilistically. The normalizing constant is a high-dimensional integral that in most cases cannot be solved analytically. But we need it, so we have to simulate it. To do this, we use Markov chain Monte Carlo, MCMC.
### 4.3.1 MCMC Background
• Stan Ulam: Manhattan project scientist
• The solitaire problem: How do you know the chance of winning?
• Can’t really solve… too much complexity
• But we can automate a bunch of games and monitor the results—basically we can do something so much that we assume the simulations are approximating the real thing.
Fun Fact: There are 80,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 solitaire combinations!
Markov Chain: transitions from one state to another (dependency)
Monte Carlo: chance applied to the transition (randomness)
• MCMC is a group of functions, governed by specific algorithms
• Metropolis-Hastings algorithm: one of the first algorithms
• Gibbs algorithm: splits multidimensional $$\theta$$ into separate blocks, reducing dimensionality
• Consider MCMC a black box, if that’s easier
### 4.3.2 MCMC Example
A politician on a chain of islands wants to spend time on each island proportional to each island’s population.
1. After visiting one island, she needs to decide…
• stay on current island
• move to island to the west
• move to island to the east
2. But she doesn’t know overall population—can ask current islanders their population and population of adjacent islands
3. Flips a coin to decide east or west island
• if selected island has larger population, she goes
• if selected island has smaller population, she goes probabilistically
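The politician's rule is the Metropolis algorithm in disguise. A minimal Python sketch (island populations are invented toy values) shows that her long-run visit frequencies approach the population proportions:

```python
import random

random.seed(1)
pop = [1, 2, 3, 4, 5, 6, 7]       # relative island populations (toy values)
n_steps = 200_000
visits = [0] * len(pop)
island = 3                        # start somewhere in the middle

for _ in range(n_steps):
    visits[island] += 1
    proposal = island + random.choice([-1, 1])   # flip a coin: east or west
    if 0 <= proposal < len(pop):
        # move for sure if the proposed island is bigger,
        # otherwise move with probability pop[proposal] / pop[island]
        if random.random() < pop[proposal] / pop[island]:
            island = proposal

freq = [v / n_steps for v in visits]
target = [p / sum(pop) for p in pop]
print([round(f, 2) for f in freq])     # close to the target proportions
print([round(t, 2) for t in target])
```

Note that she never needs the total population (the normalizing constant): the decision uses only the ratio of two populations, which is why MCMC sidesteps the intractable integral above.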
MCMC is a set of techniques to simulate draws from the posterior distribution $$p(\theta |x)$$ given a model, a likelihood $$p(x|\theta)$$, and data $$x$$, using dependent sequences of random variables. That is, MCMC yields a sample from the posterior distribution of a parameter.
### 4.3.3 Gibbs Sampling
One iteration includes as many random draws as there are parameters in the model; in other words, the chain for each parameter is updated by using the last value sampled for each of the other parameters, which is referred to as full conditional sampling.
Although the nuts and bolts of MCMC can get very detailed and may go beyond the operational knowledge you need to run models, there are some practical issues that you will need to be comfortable handling, including initial values, burn-in, convergence, and thinning.
### 4.3.4 Burn-in
• Chains start with an initial value that you specify or randomize
• Initial value may not be close to true value
• This is OK, but need time for chain to find correct parameter space
• If you know your high probability region, then you may have burned in already
• Visual Assessment can confirm burn-in
### 4.3.5 Convergence
• We run multiple independent chains for stronger evidence of correct parameter space
• When chains converge on the same space, that is strong evidence for convergence
• But how do we know or measure convergence?
• Averages of the functions may converge (chains don’t technically converge)
Convergence Diagnostics
1. Visual convergence of iterations (“hairy caterpillar” or the “grass”)
2. Visual convergence of histograms
3. Brooks-Gelman-Rubin Statistic, $$\hat{R}$$
4. Others
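As a rough sketch of diagnostic 3, the statistic compares between-chain variance to within-chain variance; for well-mixed chains (simulated independently here, so they agree by construction) it lands near 1. This is the textbook form, not the split-chain variant used by modern software:

```python
import random
import statistics

random.seed(0)
# Two toy "chains": both sample the same normal target, so they should mix.
chains = [[random.gauss(0, 1) for _ in range(1000)] for _ in range(2)]

def gelman_rubin(chains):
    # Between-chain variance B and mean within-chain variance W.
    m, n = len(chains), len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    W = statistics.fmean(statistics.variance(c) for c in chains)
    var_hat = (n - 1) / n * W + B / n   # pooled variance estimate
    return (var_hat / W) ** 0.5

print(round(gelman_rubin(chains), 2))   # ~1.0 for well-mixed chains
```

Values much above 1 (a common rule of thumb is 1.1) suggest the chains have not yet found the same parameter space.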
### 4.3.6 Thinning
MCMC chains are autocorrelated, so $$\hat{\theta}_t \sim f(\hat{\theta}_{t-1})$$. It is common practice to thin by 2, 3, or 4 to reduce autocorrelation. However, there are also arguments against thinning.
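A small simulation shows the mechanism: an AR(1) sequence stands in for an autocorrelated chain, and keeping only every 10th draw shrinks the lag-1 autocorrelation roughly like 0.9^10:

```python
import random

random.seed(2)
# AR(1)-style correlated draws stand in for an MCMC chain.
x, chain = 0.0, []
for _ in range(5000):
    x = 0.9 * x + random.gauss(0, 1)
    chain.append(x)

def lag1_corr(xs):
    # Sample lag-1 autocorrelation.
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

print(round(lag1_corr(chain), 2))        # ~0.9: strongly autocorrelated
print(round(lag1_corr(chain[::10]), 2))  # thinned by 10: much weaker
```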
### 4.3.7 MCMC Summary
There is some artistry in MCMC, or at least some decisions that need to be made by the modeler. Your final number of samples in your posterior is often much less than your total iterations, because in handling the MCMC iterations you will need to eliminate some samples (e.g., burn-in and thinning). Many MCMC adjustments you make will not result in major changes, and this is typically a good thing because it means you are in the parameter space you need to be in. Other times, you will have a model issue and some MCMC adjustment will make a (good) difference. Because computation is cheap—especially for simple models—it is common to overdo the iterations a little. This is OK.
https://docs.datajoint.io/matlab/queries/06-Restriction.html | # Restriction¶
## Restriction operators & and -¶
The restriction operator A & cond selects the subset of entities from A that meet the condition cond. The exclusion operator A - cond selects the complement of restriction, i.e. the subset of entities from A that do not meet the condition cond.
Restriction and exclusion.
The condition cond may be one of the following:
• another table
• a mapping, or struct
• an expression in a character string
• a collection of conditions as a struct or cell array
• a Boolean expression (true or false)
• a query expression
As the restriction and exclusion operators are complementary, queries can be constructed using both operators that will return the same results. For example, the queries A & cond and A - Not(cond) will return the same entities.
## Restriction by a table¶
When restricting table A with another table, written A & B, the two tables must be join-compatible (see Join compatibility). The result will contain all entities from A for which there exists a matching entity in B. Exclusion of table A with table B, or A - B, will contain all entities from A for which there are no matching entities in B.
Restriction by another table.
Exclusion by another table.
### Restriction by a table with no common attributes¶
Restriction of table A with another table B having none of the same attributes as A will simply return all entities in A, unless B is empty as described below. Exclusion of table A with B having no common attributes will return no entities, unless B is empty as described below.
Restriction by a table having no common attributes.
Exclusion by a table having no common attributes.
### Restriction by an empty table¶
Restriction of table A with an empty table will return no entities regardless of whether there are any matching attributes. Exclusion of table A with an empty table will return all entities in A.
Restriction by an empty table.
Exclusion by an empty table.
## Restriction by a mapping¶
A key-value mapping may be used as an operand in restriction. For each key that is an attribute in A, the paired value is treated as part of an equality condition. Any key-value pairs without corresponding attributes in A are ignored.
Restriction by an empty mapping or by a mapping with no keys matching the attributes in A will return all the entities in A. Exclusion by an empty mapping or by a mapping with no matches will return no entities.
For example, let’s say that table Session has the attribute session_date of datatype datetime. You are interested in sessions from January 1st, 2018, so you write the following restriction query using a mapping.
ephys.Session & struct('session_dat', '2018-01-01')
Our mapping contains a typo omitting the final e from session_date, so no keys in our mapping will match any attribute in Session. As such, our query will return all of the entities of Session.
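The key-matching semantics can be mimicked in a few lines of plain Python. This is only a toy model of the behavior described above, not the DataJoint API: matching keys become equality conditions, non-matching keys (including the typo) are ignored, and an empty mapping matches everything.

```python
# Toy model of restriction by a mapping (not the DataJoint API itself).
def restrict(entities, mapping):
    def matches(row):
        # Keys absent from the row are simply ignored.
        return all(row[k] == v for k, v in mapping.items() if k in row)
    return [row for row in entities if matches(row)]

sessions = [
    {"session_id": 1, "session_date": "2018-01-01"},
    {"session_id": 2, "session_date": "2018-02-01"},
]
print(restrict(sessions, {"session_date": "2018-01-01"}))  # one entity
print(restrict(sessions, {"session_dat": "2018-01-01"}))   # typo key ignored: all
print(restrict(sessions, {}))                              # empty mapping: all
```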
## Restriction by a string¶
Restriction can be performed when cond is an explicit condition on attribute values, expressed as a string. Such conditions may include arithmetic operations, functions, range tests, etc. Restriction of table A by a string containing an attribute not found in table A produces an error.
% All the sessions performed by Alice
ephys.Session & 'user = "Alice"'
% All the experiments at least one minute long
ephys.Experiment & 'duration >= 60'
## Restriction by a collection¶
Warning
This section documents future intended behavior in MATLAB, which is contrary to current behavior. DataJoint for MATLAB has an open issue tracking this change.
A collection can be a cell array or structure array. Cell arrays can contain collections of arbitrary restriction conditions. Structure arrays are limited to collections of mappings, each having the same attributes.
% a cell array:
cond_cell = {'first_name = "Aaron"', 'last_name = "Aaronson"'}
% a structure array:
cond_struct = struct('first_name', 'Aaron', 'last_name', 'Paul')
cond_struct(2) = struct('first_name', 'Rosie', 'last_name', 'Aaronson')
When cond is a collection of conditions, the conditions are applied by logical disjunction (logical OR). Thus, restriction of table A by a collection will return all entities in A that meet any of the conditions in the collection. For example, if you restrict the Student table by a collection containing two conditions, one for a first and one for a last name, your query will return any students with a matching first name or a matching last name.
university.Student() & {'first_name = "Aaron"', 'last_name = "Aaronson"'}
Restriction by a collection, returning any entities matching any condition in the collection.
Restriction by an empty collection returns no entities. Exclusion of table A by an empty collection returns all the entities of A.
## Restriction by a Boolean expression¶
A & true and A - false are equivalent to A.
A & false and A - true are empty.
## Restriction by a query¶
Restriction by a query object is a generalization of restriction by a table (which is also a query object), because DataJoint queries always produce well-defined entity sets, as described in entity normalization. As such, restriction by queries follows the same behavior as restriction by tables described above.
The example below creates a query object corresponding to all the sessions performed by the user Alice. The Experiment table is then restricted by the query object, returning all the experiments that are part of sessions performed by Alice.
query = ephys.Session & 'user = "Alice"'
ephys.Experiment & query | 2019-01-16 01:53:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2510457932949066, "perplexity": 2197.540630912872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583656577.40/warc/CC-MAIN-20190116011131-20190116033131-00609.warc.gz"} |
https://www.hackmath.net/en/math-problem/28131?tag_id=142 | Mixing paint with water
Mr. Adamek will paint. The purchased paint is diluted with water in a ratio of 1: 1.5.
a) how many parts of water will add to 1 part of the paint
b) how many liters of water will he add to 2 liters of paint
Correct result:
a = 1.5
b = 3 l
Solution:
$a=1.5=\frac{3}{2}$
$b=1.5 \cdot 2 = 3 \text{ l}$
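A quick numeric check of both answers (a Python sketch):

```python
# Paint diluted 1 : 1.5 (paint : water); check both parts of the answer.
ratio_water_per_paint = 1.5 / 1       # (a) parts of water per 1 part paint
paint_litres = 2
water_litres = paint_litres * ratio_water_per_paint   # (b)
print(ratio_water_per_paint)   # 1.5
print(water_litres)            # 3.0
```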
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or suggestions for rephrasing the example. Thank you!
Please write to us with your comment on the math problem or ask something. Thank you for helping each other - students, teachers, parents, and problem authors.
Tips to related online calculators
Check out our ratio calculator.
Tip: Our volume units converter will help you with the conversion of volume units.
Next similar math problems:
• Lowest voltage
Three resistors with resistors R1 = 10 kΩ, R2 = 20 kΩ, R3 = 30 kΩ are connected in series and an external voltage U = 30 V is connected to them. On which resistor is the lowest voltage?
• Resistance
Determine the resistance of the bulb with current 200 mA and is in regular lamp (230V).
• Copper Cu wire
Copper wire with a diameter of 1 mm and a weight of 350 g is wound on a spool. Calculate its length if the copper density is p = 8.9 g/cm cubic.
• Closed circuit
In a closed circuit, there is a voltage source with U1 = 12 V and with an internal resistance R1 = 0.2 Ω. The external resistance is R2 = 19.8 Ω. Determine the electric current and terminal voltage.
• The copper wire
The copper wire bundle with a diameter of 2.8mm has a weight of 5kg. How many meters of wire is bundled if 1m3 of copper weighs 8930kg?
• On the
On the map of Europe made at a scale of 1: 4000000, the distance between Bratislava and Paris is 28 cm. At what time an airplane flying at 800 km/h will fly this journey?
• Filament of bulb
The filament of bulb has a 1 ohm resistivity and is connected to a voltage 220 V. How much electric charge will pass through the fiber when the electric current passes for 10 seconds?
• Two resistors
Two resistors 20 Ω and 60 Ω are connected in series and an external voltage of 400 V is connected to them. What are the electrical voltages on the respective resistors? Please comment!
• Transformer
Solve the textbook problems - transformer: a) N1 = 40, N2 = 80, U2 = 80 V, U1 =? b) N1 = 400, U1 = 200 V, U2 = 50 V, N2 =?
• Candles
Before Christmas, Eva bought two cylindrical candles - red and green. Red was 1 cm longer than green. She lit a red candle on Christmas Day at 5:30 p. M. , lit a green candle at 7:00 p. M. , and left them both on fire until they burned. At 9:30 p. M. , bo
• Coil as a girl
The electrical resistance of the copper wire coil is 2.0 ohms. What current runs through the coil when the voltage between the terminals is 3.0 V?
• Resistance of the resistor
The resistor terminals have a voltage of 20 V and a current of 5 mA is passed through. What is the resistance of the resistor?
• Fog
The car started in fog at speed 30 km/h. After a 12-minute drive, the fog dissipated and the driver drove next 12 minutes distance 17 km. On the last 17 km long again the driving conditions deteriorated and the driver drove the speed of 51 km/h. a) Calcul
• Train delay
Due to a breakdown, the train lost 16 minutes of standing on the track behind Brno. He "eliminated" this delay so that after the start, the 80 km long section went at a speed 10 km/h higher than originally planned. What speed was it and what was it suppos
• Electric work
Calculate the work done by the electric forces passing the current of 0.2 A through the bulb in 10 minutes if the bulb is connected to a 230 V power supply.
• The coal
The coal stock would be enough to heat a larger room for 12 weeks, a smaller one for 18 weeks. It was heated for four weeks in both rooms, then only in a smaller one. How long was the coal stock enough?
• Cooker
A current of 2A passes through the immersion cooker at a voltage of 230V. What work do the electric field forces in 2 minutes? | 2020-08-09 00:15:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4768417179584503, "perplexity": 1647.7008478716982}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738366.27/warc/CC-MAIN-20200808224308-20200809014308-00007.warc.gz"} |
https://zbmath.org/?q=an:0762.14015 | # zbMATH — the first resource for mathematics
Local $$L$$-factors of motives and regularized determinants. (English) Zbl 0762.14015
In a previous paper by the author [Invent. Math. 104, No. 2, 245–261 (1991; Zbl 0739.14010)] it was shown, among other things, that the local Euler factors of the $$L$$-function of a motive over a number field at the archimedean (infinite) places can be written as regularized determinants. Here the non-archimedean places are treated, thus giving a unified approach to all $$L$$-factors.
Let $$K$$ be a non-archimedean local field with inertia group $$I$$ and residue field $$\kappa\simeq\mathbb F_q$$, $$q=p^f$$ for some prime $$p$$. Write $$W_K$$ and $$W_K'$$ for the associated Weil group, resp. Weil-Deligne group, and $$\text{Rep}(W_K')$$ for the Tannakian category of finite dimensional complex representations of $$W_K'$$. The group ring $$\mathbb C [\mathbb C]$$ consists of elements of the form $$\sum re^\alpha$$, where $$r,\alpha\in\mathbb C$$ and the symbols $$e^\alpha$$ obey the rule $$e^{\alpha+\alpha'}=e^\alpha e^{\alpha'}$$. $$B$$ denotes the $$\mathbb C$$-algebra $$\mathbb C[\mathbb C]$$ equipped with two structures:
(i) an unramified representation of $$W_K'$$ such that $$\rho(w)(e^\alpha)=\| w\|^\alpha\cdot e^\alpha$$ for all $$w\in W_K$$, where $$\rho$$ defines the representation of $$W_K'$$;
(ii) a $$\mathbb C$$-linear derivation $$\Theta$$ defined by $$\Theta(e^\alpha)=\alpha e^\alpha$$.
Finally, let $$\mathbb L=B^{W_K'}$$ denote the $$\mathbb C$$-algebra of Laurent polynomials in $$e^{\alpha_q}$$ with $$\alpha_q=2\pi i/\log q$$, and let $$\Delta=\mathbb L(\Theta)$$. The additive category of left $$\Delta$$-modules which are free of finite rank over $$\mathbb L$$ is written $$\text{Der}_\kappa$$.
One defines an additive functor $$\mathbb D:\text{Rep}(W_K')\to\text{Der}_\kappa$$ by $$\mathbb D(H)=(H\otimes B)^{W_K'}=(H_N\otimes B)^{\rho(W_K)}$$ with $$H_N$$ the kernel of the nilpotent endomorphism $$N$$ occurring in the definition of the representation $$H$$ of $$W_K'$$. In general one has $$\text{rk}_\mathbb L\mathbb D(H)\leq\dim H^I_N$$, and in case of equality, the representation $$H$$ is called admissible. One obtains a full semisimple Tannakian subcategory of admissible representations $$\text{Rep}^{\text{ad}}(W_K')$$.
On the other hand, one defines an additive functor $$\mathbb H: \text{Der}_\kappa\to\text{Rep}^{\text{ad}}(W_K')$$ by $$\mathbb H(D)=(D\otimes B)^{\Theta=0}$$ with $$W_K'$$-action induced by the one on $$B$$. In general $$\dim\mathbb H(D)\leq\text{rk}_\mathbb L D$$. $$D$$ is called admissible if there is equality. $$\text{Der}_\kappa^{\text{ad}}$$ will denote the full subcategory in $$\text{Der}_\kappa$$ of admissible objects. Several characterizations of admissibility can be given. The first result now says:
The functors $$\mathbb D$$ and $$\mathbb H$$ provide quasi- inverse equivalence of the Tannakian categories $$\text{Rep}^{\text{ad}}(W_K')$$ and $$\text{Der}_\kappa^{\text{ad}}$$, commuting with tensor products, twists and duals.
As usual, for a representation $$H$$ in $$\text{Rep}(W_K')$$, the local $$L$$-factor is given by
$L_K(H,s) = \det(1-\rho(\Phi)q^{-s}| H^I_N)^{-1},$
where $$\Phi$$ is the geometric Frobenius. For the maximal admissible submodule $$H^{\text{ad}}$$ of $$H$$, the following result can be shown:
$L_K(H^{\text{ad}},s) = \det_\infty\left(\frac{\log q}{2\pi i}(s-\Theta)|\mathbb D(H)\right)^{-1}$
where $$\det_\infty$$ denotes the regularized determinant. Writing $$\mathcal M_K$$ for the category of (pure) Deligne motives, i.e. motives for absolute Hodge cycles over $$K$$, and fixing an embedding $$\sigma: \mathbb Q_\ell\hookrightarrow\mathbb C$$, one has the realization functor, believed to be independent of $$\ell$$ and $$\sigma$$, $$H_{\ell,\sigma}: \mathcal M_K\to\text{Rep}(W_K')$$, $$\ell\neq p$$, given by $$H^\bullet_{\ell,\sigma}(M)=H^\bullet_\ell(M)\otimes_{\mathbb Q_\ell,\sigma}\mathbb C$$. Denote by $$\mathcal M_K^{\text{ad}}$$ the subcategory of $$\mathcal M_K$$ with objects $$M$$ such that $$H^\bullet_{\ell,\sigma}(M)\in\text{Rep}^{\text{ad}}(W_K')$$. For $$M$$ in $$\mathcal M_K^{\text{ad}}$$ one sets $$H^\bullet(M/\mathbb L)=\mathbb D H^\bullet_{\ell,\sigma}(M)$$. For a smooth projective variety over $$K$$ such that the associated motive $$h(X)$$ is in $$\mathcal M_K^{\text{ad}}$$, one writes $$H^\bullet(X/\mathbb L)$$ for $$H^\bullet(h(X)/\mathbb L)$$. Also, for any $$M$$ in $$\mathcal M_K$$, one writes $$L_K(H^w(M),s)=L_K(H^w_{\ell,\sigma}(M),s)$$. For $$M=h(X)$$ where $$X$$ has good reduction this local factor is known to be independent of $$\ell$$ and $$\sigma$$. The following theorem is proven: For $$X$$ smooth projective over $$K$$, and $$w=0,1$$ one has
$\det_\infty\left(\frac{\log q}{2\pi i}(s-\Theta)|\mathbb D H^w_{\ell,\sigma}(X)\right)^{-1}=L_K(H^w(X),s),$
in particular, if $$X$$ has good reduction the $$\mathbb D H^w_{\ell,\sigma}(X)$$ may be replaced by $$H^w(X/\mathbb L)$$. The theorem is expected to hold for all $$w$$.
The theorem suggests a global cohomological approach to $$L$$-functions of varieties over number fields. Concretely, there should exist a big site containing $$\overline{\text{Spec}(\mathbb Z)}=\text{Spec}(\mathbb Z)\cup\{\infty\}$$ with suitable properties. This formalism should give rise to some remarkable expressions involving the Riemann zeta-function. One result can indeed be proven by analytic means: For $$\operatorname{Re} z>1$$ consider the Dirichlet series $$\xi(s,z)=\sum_\rho\frac{1}{\bigl[\frac{1}{2\pi}(z-\rho)\bigr]^s}$$, where $$\rho$$ runs over the nontrivial zeros of the Riemann zeta-function and such that $$\arg(z-\rho)\in\bigl(-\frac{\pi}{2},\frac{\pi}{2}\bigr)$$. Then $$\xi(s,z)$$ converges absolutely for $$\operatorname{Re} s>1$$ and, for fixed $$z$$, it admits an analytic continuation to a holomorphic function in $$\mathbb C\backslash\{1\}$$. One has the formula:
$2^{-1/2}(2\pi)^{-2}\pi^{-z/2}\Gamma\left(\frac{z}{2}\right)\zeta(z)z(z-1)=\exp(-(\partial_s\xi)(0,z)).$
##### MSC:
14G10 Zeta functions and related questions in algebraic geometry (e.g., Birch-Swinnerton-Dyer conjecture)
14A20 Generalizations (algebraic spaces, stacks)
11M36 Selberg zeta functions and regularized determinants; applications to spectral theory, Dirichlet series, Eisenstein series, etc. (explicit formulas)
11M38 Zeta and $$L$$-functions in characteristic $$p$$
##### References:
[1] [A] Ahlfors, L.V.: Complex Analysis. New York: McGraw-Hill 1966 · Zbl 0154.31904
[2] [Ba] Barner, K.: On A. Weil's explicit formula. J. Reine Angew. Math. 323, 139-152 (1981) · Zbl 0446.12013
[3] [C-V] Cartier, P., Voros, A.: Une nouvelle interprétation de la formule des traces de Selberg. (Grothendieck Festschrift II) Boston Basel Stuttgart: Birkhäuser 1991
[4] [D] Deninger, C.: On the Γ-factors attached to motives. Invent. Math. 104, 245-261 (1991) · Zbl 0739.14010
[5] [E] Erdelyi, A. et al.: Higher transcendental functions, vol. I. Bateman Manuscript Project. New York: McGraw-Hill 1953
[6] [F] Fontaine, J.-M.: Modules galoisiens, modules filtrés et anneaux de Barsotti-Tate. In: Journées de Géométrie Algébrique de Rennes (Astérisque, vol. 65, pp. 3-80) Paris: Soc. Math. de France 1979
[7] [G] Grothendieck, A.: Modèles de Néron et Monodromie. Exp. IX in SGA 7, I. (Lect. Notes Math., vol. 288) Berlin Heidelberg New York: Springer 1972
[8] [J1] Jannsen, U.: On the l-adic cohomology of varieties over number fields and its Galois cohomology. In: Y. Ihara, K. Ribet, J.-P. Serre (eds.) Galois Groups over Q, pp. 315-353. (Publ., Math. Sci. Res. Inst., vol. 16) Berlin Heidelberg New York: Springer 1989
[9] [J2] Jannsen, U.: Motivic cohomology, l-adic cohomology and vanishing orders of L-functions. (Preprint 1990)
[10] [K1] Kurokawa, N.: Parabolic components of zeta functions. Proc. Japan Acad., Ser. A 64, 21-24 (1988) · Zbl 0642.10028
[11] [K2] Kurokawa, N.: Analyticity of Dirichlet series over prime powers. (Lect. Notes Math., vol. 1434, pp. 168-177) Berlin Heidelberg New York: Springer 1990
[12] [K3] Kurokawa, N.: Multiple zeta functions: an example. In: Zeta functions in geometry. Adv. Stud. Pure Math. 1991 (to appear)
[13] [Se] Serre, J.-P.: Facteurs locaux des fonctions zêta des variétés algébriques (définitions et conjectures). Séminaire Delange-Pisot-Poitou, exposé 19, 1969/70
[14] [So] Soulé, Ch.: Letter to the author, February 13, 1991
[15] [Ta] Tate, J.: Number theoretical background. In: A. Borel, W. Casselman (eds.): Automorphic forms, representations and L-functions, pp. 3-26. Corvallis 1977. (Proc. Symp. Pure Math. XXXIII, 2) Providence: Am. Math. Soc. 1979
[16] [V] Voros, A.: Spectral functions and the Selberg zeta function. Commun. Math. Phys. 111, 439-465 (1987) · Zbl 0631.10025
http://www.chegg.com/homework-help/questions-and-answers/sketch-pv-diagram-work-gasduring-following-stages-draw-pv-diagram-onpaper-instructor-ask-t-q151549 | ## help with PV diagram please
Sketch a PV diagram and find the work done by the gas during the following stages. (Draw the PV diagram on paper. Your instructor may ask you to turn in this work.)
(a) A gas is expanded from a volume of 1.0 L to 2.8 L at a constant pressure of 2.7 atm.
(b) The gas is then cooled at constant volume until the pressure falls to 2.1 atm.
(c) The gas is then compressed at a constant pressure of 2.1 atm from a volume of 2.8 L to 1.0 L. [Note: Be careful of the signs.]
(d) The gas is heated until its pressure increases from 2.1 atm to 2.7 atm at a constant volume.
(e) Find the net work done during the complete cycle.
(All answers in joules.)
• a)
The work done when the gas expands at constant pressure is
W = P·ΔV
= (2.7 atm)(2.8 L − 1.0 L)
= (2.7 atm)(1.8 L)
= (2.7)(1.013×10^5 Pa)(1.8×10^−3 m^3)
≈ 492 J
b)
The work done at constant volume is zero, because dW = P·dV = P(0) = 0.
c)
The work done when the gas is compressed is
W = P·ΔV
= (2.1 atm)(1.0 L − 2.8 L)
= (2.1)(1.013×10^5 Pa)(−1.8×10^−3 m^3)
≈ −383 J
d)
This process also takes place at constant volume, so the work done is zero.
e)
The net work done is
W_net = (2.7)(1.013×10^5 Pa)(1.8×10^−3 m^3) + (2.1)(1.013×10^5 Pa)(−1.8×10^−3 m^3)
= (0.6)(1.013×10^5 Pa)(1.8×10^−3 m^3)
≈ 109 J
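The four stages and the net work can be checked numerically (a sketch; 1 atm = 1.013×10^5 Pa and 1 L = 10^−3 m^3 assumed, as in the solution above):

```python
ATM = 1.013e5   # Pa per atm
LITRE = 1e-3    # m^3 per litre

def work(p_atm, v1_l, v2_l):
    """Work done by the gas at constant pressure: W = P * (V2 - V1)."""
    return p_atm * ATM * (v2_l - v1_l) * LITRE

w_a = work(2.7, 1.0, 2.8)   # expansion at 2.7 atm
w_b = 0.0                   # constant volume: no work
w_c = work(2.1, 2.8, 1.0)   # compression at 2.1 atm (negative)
w_d = 0.0                   # constant volume: no work

w_net = w_a + w_b + w_c + w_d
print(round(w_a, 1), round(w_c, 1), round(w_net, 1))  # 492.3 -382.9 109.4
```

Note that the net work equals the enclosed area of the rectangular cycle on the PV diagram, (2.7 − 2.1 atm)(1.8 L) ≈ 109 J.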
https://openseespydoc.readthedocs.io/en/latest/src/plot_model.html | # 13.3. plot_model command
postprocessing.Get_Rendering.plot_model(<"nodes">, <"elements">, <Model="ModelName">)
Once the model is built, it can be visualized using this command. By default, node and element tags are not displayed. Matplotlib and Numpy are required. No analysis is required in order to visualize the model.
To visualize an OpenSees model from an existing database (created using the createODB() command), the optional argument Model="ModelName" should be used. The command will read the model data from the folder ModelName_ODB.
"nodes" (str) Displays the node tags on the model. (optional) "elements" (str) Displays the element tags on the model. (optional) ModelName (str) Displays the model saved in a database named “ModelName_ODB” (optional)
Input arguments to display node and element tags can be used in any combination.
plot_model()
Displays the model using data from the active OpenSeesPy model with no node and element tags on it.
plot_model("nodes")
Displays the model using data from the active OpenSeesPy model with only node tags on it.
plot_model("elements")
Displays the model using data from the active OpenSeesPy model with only element tags on it.
plot_model("nodes","elements")
Displays the model using data from the active OpenSeesPy model with both node and element tags on it.
plot_model("nodes",Model="ModelName")
Displays the model using data from an existing database "ModelName_ODB" with only node tags on it.
https://www.vcalc.com/wiki/vCollections/LM+12.2+Potential+energy%3A+energy+of+distance+or+closeness+Collection | # LM 12.2 Potential energy: energy of distance or closeness
## 12.2 Potential energy: energy of distance or closeness
vCalc Companion Formulas (vCalc Formulary, 12.2 Potential Energy):
• KE = 1/2 m v^2 (Kinetic Energy)
• v_f^2 = v_i^2 + 2a Deltay (Final Velocity Squared)
• a = F/m (Acceleration)
• DeltaPEgrav = -F*Deltay (Potential Energy)
• U(y) = m·g·y (Potential Energy)
We have already seen many examples of energy related to the distance between interacting objects. When two objects participate in an attractive noncontact force, energy is required to bring them farther apart. In both of the perpetual motion machines that started off the previous chapter, one of the types of energy involved was the energy associated with the distance between the balls and the earth, which attract each other gravitationally. In the perpetual motion machine with the magnet on the pedestal, there was also energy associated with the distance between the magnet and the iron ball, which were attracting each other.
The opposite happens with repulsive forces: two socks with the same type of static electric charge will repel each other, and cannot be pushed closer together without supplying energy.
In general, the term potential energy, with algebra symbol PE, is used for the energy associated with the distance between two objects that attract or repel each other via a force that depends on the distance between them. Forces that are not determined by distance do not have potential energy associated with them. For instance, the normal force acts only between objects that have zero distance between them, and depends on other factors besides the fact that the distance is zero. There is no potential energy associated with the normal force.
c / The skater has converted all his kinetic energy into potential energy on the way up the side of the pool.
The following are some commonplace examples of potential energy:
• gravitational potential energy: The skateboarder in the photo has risen from the bottom of the pool, converting kinetic energy into gravitational potential energy. After being at rest for an instant, he will go back down, converting PE back into KE.
• magnetic potential energy: When a magnetic compass needle is allowed to rotate, the poles of the compass change their distances from the earth's north and south magnetic poles, converting magnetic potential energy into kinetic energy. (Eventually the kinetic energy is all changed into heat by friction, and the needle settles down in the position that minimizes its potential energy.)
• electrical potential energy: Socks coming out of the dryer cling together because of attractive electrical forces. Energy is required in order to separate them.
• potential energy of bending or stretching: The force between the two ends of a spring depends on the distance between them, i.e., on the length of the spring. If a car is pressed down on its shock absorbers and then released, the potential energy stored in the spring is transformed into kinetic and gravitational potential energy as the car bounces back up.
I have deliberately avoided introducing the term potential energy up until this point, because it tends to produce unfortunate connotations in the minds of students who have not yet been inoculated with a careful description of the construction of a numerical energy scale. Specifically, there is a tendency to generalize the term inappropriately to apply to any situation where there is the “potential” for something to happen: “I took a break from digging, but I had potential energy because I knew I'd be ready to work hard again in a few minutes.”
d / As the skater free-falls, his PE is converted into KE. (The numbers would be equally valid as a description of his motion on the way up.)
### An equation for gravitational potential energy
All the vital points about potential energy can be made by focusing on the example of gravitational potential energy. For simplicity, we treat only vertical motion, and motion close to the surface of the earth, where the gravitational force is nearly constant. (The generalization to the three dimensions and varying forces is more easily accomplished using the concept of work, which is the subject of the next chapter.)
To find an equation for gravitational PE, we examine the case of free fall, in which energy is transformed between kinetic energy and gravitational PE. Whatever energy is lost in one form is gained in an equal amount in the other form, so using the notation DeltaKE to stand for KE_f−KE_i and a similar notation for PE, we have
DeltaKE = −DeltaPE_(grav).   [1]
It will be convenient to refer to the object as falling, so that PE is being changed into KE, but the math applies equally well to an object slowing down on its way up. We know an equation for kinetic energy,
KE = 1/2 m v^2,   [2]
so if we can relate v to height, y, we will be able to relate DeltaPE to y, which would tell us what we want to know about potential energy. The y component of the velocity can be connected to the height via the constant acceleration equation
v_f^2 = v_i^2 + 2a Deltay,   [3]
and Newton's second law provides the acceleration,
a=F/m,
in terms of the gravitational force.
The algebra is simple because both equation [2] and equation [3] have velocity to the second power. Equation [2] can be solved for v^2 to give v^2 = 2KE/m, and substituting this into equation [3], we find
2(KE_f)/m = 2(KE_i)/m + 2a Deltay.   [4]
Making use of equations [1] and [4] gives the simple result
DeltaPEgrav = -F*Deltay. [change in gravitational PE resulting from a change in height Deltay; F is the gravitational force on the object, i.e., its weight; valid only near the surface of the earth, where F is constant]
##### Example 1: Dropping a rock
▹ If you drop a 1-kg rock from a height of 1 m, how many joules of KE does it have on impact with the ground? (Assume that any energy transformed into heat by air friction is negligible.)
▹ If we choose the y axis to point up, then F_y is negative, and equals −(1 kg)(g) = −9.8 N. A decrease in y is represented by a negative value of Deltay, Deltay = −1 m, so the change in potential energy is −(−9.8 N)(−1 m) ≈ −10 J. (The proof that newtons multiplied by meters give units of joules is left as a homework problem.) Conservation of energy says that the loss of this amount of PE must be accompanied by a corresponding increase in KE of 10 J.
It may be dismaying to note how many minus signs had to be handled correctly even in this relatively simple example: a total of four. Rather than depending on yourself to avoid any mistakes with signs, it is better to check whether the final result make sense physically. If it doesn't, just reverse the sign.
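The sign bookkeeping of Example 1 can be checked numerically, with the y axis pointing up (a sketch; g = 9.8 m/s^2 assumed):

```python
m = 1.0        # mass of the rock, kg
g = 9.8        # gravitational field strength, m/s^2

F = -m * g     # y component of the gravitational force, N (negative: points down)
dy = -1.0      # the rock drops 1 m, so Deltay is negative

dPE = -F * dy  # equation: DeltaPEgrav = -F * Deltay
dKE = -dPE     # conservation of energy: DeltaKE = -DeltaPE
print(dPE, dKE)  # -9.8 9.8 : PE falls by ~10 J, KE rises by ~10 J
```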
Although the equation for gravitational potential energy was derived by imagining a situation where it was transformed into kinetic energy, the equation can be used in any context, because all the types of energy are freely convertible into each other.
##### Example 2: Gravitational PE converted directly into heat
▹ A 50-kg firefighter slides down a 5-m pole at constant velocity. How much heat is produced?
▹ Since she slides down at constant velocity, there is no change in KE. Heat and gravitational PE are the only forms of energy that change. Ignoring plus and minus signs, the gravitational force on her body equals mg, and the amount of energy transformed is
(mg)(5 m)=2500 J.
On physical grounds, we know that there must have been an increase (positive change) in the heat energy in her hands and in the flagpole.
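Example 2's figure can be reproduced in one line (a sketch; the example's round 2500 J implies g ≈ 10 m/s^2 rather than 9.8):

```python
m = 50.0   # mass of the firefighter, kg
g = 10.0   # m/s^2, the rounded value the example's 2500 J implies
h = 5.0    # height slid down, m

heat = m * g * h   # all the lost gravitational PE becomes heat
print(heat)        # 2500.0 J
```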
Here are some questions and answers about the interpretation of the equation DeltaPEgrav = -F*Deltay for gravitational potential energy.
Question: In a nutshell, why is there a minus sign in the equation?
Answer: It is because we increase the PE by moving the object in the opposite direction compared to the gravitational force.
Question: Why do we only get an equation for the change in potential energy? Don't I really want an equation for the potential energy itself?
Answer: No, you really don't. This relates to a basic fact about potential energy, which is that it is not a well defined quantity in the absolute sense. Only changes in potential energy are unambiguously defined. If you and I both observe a rock falling, and agree that it deposits 10 J of energy in the dirt when it hits, then we will be forced to agree that the 10 J of KE must have come from a loss of 10 joules of PE. But I might claim that it started with 37 J of PE and ended with 27, while you might swear just as truthfully that it had 109 J initially and 99 at the end. It is possible to pick some specific height as a reference level and say that the PE is zero there, but it's easier and safer just to work with changes in PE and avoid absolute PE altogether.
Question: You referred to potential energy as the energy that two objects have because of their distance from each other. If a rock falls, the object is the rock. Where's the other object?
Answer: Newton's third law guarantees that there will always be two objects. The other object is the planet earth.
Question: If the other object is the earth, are we talking about the distance from the rock to the center of the earth or the distance from the rock to the surface of the earth?
Answer: It doesn't matter. All that matters is the change in distance, Deltay, not y. Measuring from the earth's center or its surface are just two equally valid choices of a reference point for defining absolute PE.
Question: Which object contains the PE, the rock or the earth?
Answer: We may refer casually to the PE of the rock, but technically the PE is a relationship between the earth and the rock, and we should refer to the earth and the rock together as possessing the PE.
Question: How would this be any different for a force other than gravity?
Answer: It wouldn't. The result was derived under the assumption of constant force, but the result would be valid for any other situation where two objects interacted through a constant force. Gravity is unusual, however, in that the gravitational force on an object is so nearly constant under ordinary conditions. The magnetic force between a magnet and a refrigerator, on the other hand, changes drastically with distance. The math is a little more complex for a varying force, but the concepts are the same.
Question: Suppose a pencil is balanced on its tip and then falls over. The pencil is simultaneously changing its height and rotating, so the height change is different for different parts of the object. The bottom of the pencil doesn't lose any height at all. What do you do in this situation?
Answer: The general philosophy of energy is that an object's energy is found by adding up the energy of every little part of it. You could thus add up the changes in potential energy of all the little parts of the pencil to find the total change in potential energy. Luckily there's an easier way! The derivation of the equation for gravitational potential energy used Newton's second law, which deals with the acceleration of the object's center of mass (i.e., its balance point). If you just define Deltay as the height change of the center of mass, everything works out. A huge Ferris wheel can be rotated without putting in or taking out any PE, because its center of mass is staying at the same height.
self-check:
A ball thrown straight up will have the same speed on impact with the ground as a ball thrown straight down at the same speed. How can this be explained using potential energy?
##### Discussion Question
◊ You throw a steel ball up in the air. How can you prove based on conservation of energy that it has the same speed when it falls back into your hand? What if you throw a feather up --- is energy not conserved in this case?
http://gmatclub.com/forum/in-the-figure-above-the-point-on-segment-pq-that-is-twice-a-139117.html?oldest=1
# In the figure above, the point on segment PQ that is twice a
Math Expert (Bunuel), 18 Sep 2012
Attachment: Plane.png
In the figure above, the point on segment PQ that is twice as far from P as from Q is
(A) (3,1)
(B) (2,1)
(C) (2,-1)
(D) (1.5,0.5)
(E) (1,0)
Practice Questions: Question 43, Page 158, Difficulty: 600
Math Expert (Bunuel), 18 Sep 2012
Options A and C cannot be the correct answer, since these points aren't even on segment PQ. E (1,0) is clearly closer to P, so it's also out, and D is right in the middle of the segment, so only option B is left.
Manager, 18 Sep 2012

Ans: (2,1). Making a graph and solving, we get (2,1).
Manager, 18 Sep 2012

The question asks us to divide the line PQ in the ratio 2:1 and find that point.
By symmetry, the segment is divided in the ratio 1:2 at the x-axis point (1,0).
Similarly, at (2,1) the line is divided in the ratio 2:1.
Hence B.
Manager, 18 Sep 2012

By looking at the figure, option B, (2,1), is the right choice.
Math Expert
Joined: 02 Sep 2009
Posts: 33000
Followers: 5753
Kudos [?]: 70492 [0], given: 9847
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
### Show Tags
21 Sep 2012, 03:18
Expert's post
In the figure above, the point on segment PQ that is twice as far from P as from Q is
(A) (3,1)
(B) (2,1)
(C) (2,-1)
(D) (1.5,0.5)
(E) (1,0)
Options A and C cannot be the correct answer since these points aren't even on segment PQ. E (1,0) is clearly closer to P, so it's also out; D is right in the middle of the segment, so only option B is left.
Kudos points given to everyone with correct solution. Let me know if I missed someone.
_________________
Director
Status: Gonna rock this time!!!
Joined: 22 Jul 2012
Posts: 547
Location: India
GMAT 1: 640 Q43 V34
GMAT 2: 630 Q47 V29
WE: Information Technology (Computer Software)
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
26 Oct 2012, 03:25
twice as far from P as from Q
This confused me.. I thought the question was asking for the midpoint..
Could somebody explain to me what it is asking?
And thanks Bunuel for the POE method, but how would you solve this algebraically?
Manager
Joined: 28 May 2009
Posts: 155
Location: United States
Concentration: Strategy, General Management
GMAT Date: 03-22-2013
GPA: 3.57
WE: Information Technology (Consulting)
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
21 Dec 2012, 15:14
Sachin9 wrote:
twice as far from P as from Q
This confused me.. I thought the question was asking for the midpoint..
Could somebody explain to me what it is asking?
And thanks Bunuel for the POE method, but how would you solve this algebraically?
I concur. I think it would be helpful if we could solve this algebraically using the given y-coordinate and point Q, rather than elimination, similar to Bunuel's approach to this problem.
Director
Status: Gonna rock this time!!!
Joined: 22 Jul 2012
Posts: 547
Location: India
GMAT 1: 640 Q43 V34
GMAT 2: 630 Q47 V29
WE: Information Technology (Computer Software)
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
22 Dec 2012, 01:49
Please suggest an algebraic approach, Bunuel!
Math Expert
Joined: 02 Sep 2009
Posts: 33000
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
22 Dec 2012, 05:04
Sachin9 wrote:
Please suggest an algebraic approach, Bunuel!
I would never solve this question algebraically, but you can check for the tools for that here: math-coordinate-geometry-87652.html
Manager
Joined: 24 Mar 2010
Posts: 81
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
26 Dec 2012, 03:06
Let's see the algebraic solution.
A point P that divides a line AB internally in the ratio m1:m2 is shown below.
A(x1,y1)____________m1_____________P(x3,y3)________________________m2__________________________B(x2,y2)
The formula for finding coordinates are as below.
$$x3 = (x1*m2 + x2*m1)/(m1 + m2)$$
$$y3 = (y1*m2 + y2*m1)/(m1 + m2)$$
Back to our question.
P (0,-1 ) & Q (3,2)
We have m1 = 2 and m2 = 1
$$x1 = 0 , y1 = -1$$
$$x2 = 3 , y2 = 2$$
Plugging in the values, we obtain the answer (2,1). Hence B
Once you know the formula it's very easy.
P.S. Bunuel's approach is beautifully elegant. However, for me solving algebraically is much faster than figuring the answer choices out.
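The section formula above lends itself to a quick sanity check in code. A minimal Python sketch (the function name and float arithmetic are mine, purely illustrative):

```python
def section_point(a, b, m1, m2):
    """Point dividing segment AB internally in the ratio m1:m2,
    measured from A, via the section formula."""
    x = (a[0] * m2 + b[0] * m1) / (m1 + m2)
    y = (a[1] * m2 + b[1] * m1) / (m1 + m2)
    return (x, y)

# P(0,-1), Q(3,2); "twice as far from P as from Q" means ratio 2:1 from P.
print(section_point((0, -1), (3, 2), 2, 1))  # -> (2.0, 1.0), answer B
```

Note that the ratio 1:1 gives the midpoint (1.5, 0.5), which is choice D, the trap answer.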
Director
Joined: 29 Nov 2012
Posts: 900
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
29 Dec 2012, 19:44
The only way to solve this problem is by plugging in answer choices?
Manager
Joined: 24 Mar 2010
Posts: 81
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
30 Dec 2012, 01:11
fozzzy wrote:
The only way to solve this problem is by plugging in answer choices?
See this
in-the-figure-above-the-point-on-segment-pq-that-is-twice-a-139117.html#p1161061
If you need further elaboration, let me know.
Director
Joined: 29 Nov 2012
Posts: 900
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
30 Dec 2012, 02:10
I was thinking along those lines by forming a triangle and then solving it. This is a fairly easy question, so you can still use the answer choices, but if it were a bit more complicated I would like a faster approach.
Attachments
Plane 2.png
Manager
Joined: 24 Mar 2010
Posts: 81
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
30 Dec 2012, 02:26
fozzzy wrote:
I was thinking of using the similar triangle property, forming a triangle and then solving it.
You can solve it easily using similar triangles too, but you've picked the wrong triangles.
Take the triangle with vertices [ (-1, 0) , (1,1) , (0,0) ]
and the triangle with vertices [ (3, 2) , (1,1) , (3,0) ]
These two triangles are similar since their side lengths are in the ratio 1:2,
and then you can proceed.
Senior Manager
Status: Prevent and prepare. Not repent and repair!!
Joined: 13 Feb 2010
Posts: 275
Location: India
Concentration: Technology, General Management
GPA: 3.75
WE: Sales (Telecommunications)
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
19 Jan 2013, 04:24
This is called the section formula. It will be helpful. Thanks
Intern
Joined: 16 Nov 2012
Posts: 44
Location: United States
Concentration: Operations, Social Entrepreneurship
Schools: ISB '15, NUS '16
GMAT Date: 08-27-2013
GPA: 3.46
WE: Project Management (Other)
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
19 Jan 2013, 10:25
http://www.teacherschoice.com.au/Maths_ ... Geom_3.htm
go through the page..
ans is (2,1)
_________________
.........................................................................................
Please give me kudos if my posts help.
Intern
Joined: 22 Jan 2012
Posts: 22
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
20 Jan 2013, 02:11
I think with questions like these, the test writers are testing whether you'd quickly jump to using an algebraic approach, which in this case is much more time consuming, as compared to making the answer choices a part of your toolbox for finding the correct answer..
The question itself tells us we need to split the line into 3 equal parts with the asked coordinate being 2 parts away from P
A quick glance at the graph gives us the slope of 1, which easily shows us which points will cover the three segments
P(0,-1) --> (1,0) --> (2,1) --> Q(3,2)
Thus (2,1) being twice as far from P as from Q
True, an algebraic approach might be required for more complex problems where the slope isn't easily determined or the line segment might be split into a different ratio, but this question isn't testing that
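The "three equal steps" idea in this post is easy to verify mechanically. A short Python sketch (the helper name is mine, purely illustrative):

```python
def split_points(p, q, parts):
    """Interior points that cut segment pq into `parts` equal pieces."""
    return [(p[0] + (q[0] - p[0]) * k / parts,
             p[1] + (q[1] - p[1]) * k / parts)
            for k in range(1, parts)]

# P(0,-1) -> (1,0) -> (2,1) -> Q(3,2)
print(split_points((0, -1), (3, 2), 3))  # -> [(1.0, 0.0), (2.0, 1.0)]
```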
SVP
Status: The Best Or Nothing
Joined: 27 Dec 2012
Posts: 1858
Location: India
Concentration: General Management, Technology
WE: Information Technology (Computer Software)
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
06 Aug 2013, 01:41
Coordinates of point P = (0,-1)
Coordinates of point Q = (3,2)
The required point is on segment PQ, twice as far from P as from Q.
Since vector PQ = (3,3), two-thirds of it is (2,2); adding (2,2) to point P (0,-1) gives (2,1). Hence B
Manager
Joined: 10 Mar 2014
Posts: 232
Re: In the figure above, the point on segment PQ that is twice a [#permalink]
10 Aug 2014, 23:59
We can solve this question by applying the distance formula as well.
The distance between two points (x1,y1) and (x2,y2) is
sqrt((x2-x1)^2 + (y2-y1)^2)
Here we take options one by one
P(0,-1) q(3,2)
When we take option B, (2,1),
we get distance sqrt(8) from point P and sqrt(2) from point Q,
so this point is twice as far from P as from Q. Hence B
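The same distance check can be run over all five choices at once. A small Python sketch (`math.dist` needs Python 3.8+; the dictionary layout is mine):

```python
from math import dist

P, Q = (0, -1), (3, 2)
choices = {"A": (3, 1), "B": (2, 1), "C": (2, -1), "D": (1.5, 0.5), "E": (1, 0)}
for label, point in choices.items():
    # Distance of each answer choice from P and from Q.
    print(label, round(dist(P, point), 3), round(dist(Q, point), 3))
# Only B gives sqrt(8) and sqrt(2), i.e. twice as far from P as from Q.
```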
Display posts from previous: Sort by | 2016-05-26 05:42:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7394064664840698, "perplexity": 5469.98548069049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275645.9/warc/CC-MAIN-20160524002115-00084-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://www.researcher-app.com/paper/1643251 | 3 years ago
Deltas, extended odd holes and their blockers
Publication date: Available online 14 November 2018
Source: Journal of Combinatorial Theory, Series B
Let $C$ be a clutter over ground set $V$ where no element is contained in every member. We prove that if there is a $w \in \mathbb{R}^V_+$ such that every member has weight greater than half the weight of $V$, then there must be a delta or the blocker of an extended odd hole minor. The proof of this result relies on a tool developed for finding delta or extended odd hole minors. | 2022-09-25 20:09:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7053731083869934, "perplexity": 371.05895461392845}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00302.warc.gz"}
https://tex.stackexchange.com/questions/125009/find-optimal-kerning-for-space-between-inline-math-and-text | # Find optimal kerning for space between inline math and text
When I write something like
$\mathrm{C}^*$-algebra
for a C*-algebra, the space between * and - is too large. How do I change this? Are there general criteria for how much such space should be shrunk?
• You can manually insert \! in mathmode for a small negative space. – Steven B. Segletes Jul 22 '13 at 14:00
• Thanks! How do I know this is the "right" amount of negative space? (Or do I have to estimate on my own?) – Deniz Jul 22 '13 at 14:07
• It's subjective in a case like this... just be consistent. – Steven B. Segletes Jul 22 '13 at 14:13
Here's an analysis of what happens:
\documentclass[10pt]{article}
\usepackage{tabularx}
\usepackage[table]{xcolor}
\setlength{\fboxrule}{0.1pt}
\setlength{\fboxsep}{-\fboxrule}
\begin{document}
$\mathrm{C}^*$-algebra
\fbox{$\mathrm{C}^*$}-algebra
\fbox{$\mathrm{C}^{\fbox{$\scriptstyle*$}}$}-algebra
\begin{tabular}{@{}ll}
\fbox{$\mathrm{C}^*$}\kern-.1ex-algebra &
$\mathrm{C}^*$\kern-.1ex-algebra \\
\fbox{$\mathrm{C}^*$}\kern-.15ex-algebra &
$\mathrm{C}^*$\kern-.15ex-algebra \\
\fbox{$\mathrm{C}^*$}\kern-.2ex-algebra &
$\mathrm{C}^*$\kern-.2ex-algebra \\
\fbox{$\mathrm{C}^*$}\kern-.25ex-algebra &
$\mathrm{C}^*$\kern-.25ex-algebra \\
\fbox{$\mathrm{C}^*$}\kern-.3ex-algebra &
$\mathrm{C}^*$\kern-.3ex-algebra \\
\end{tabular}
\end{document}
The first three rows show the normal typesetting, which I don't find really bad, but it's subjective. The following rows show the same with increasing amount of (negative) kerning; on the left you can see the relation of the hyphen with the bounding box of “C*”. When you have decided the right amount, do
\newcommand{\csalg}{$\mathrm{C}^*$\kern-.1ex-algebra}
and type \csalg{} when you want to use the term. Choose the name you prefer, of course. It's probably worth defining the command even if you eventually decide on no kerning.
• Have there ever been attempts to add automatic kerning between math and text? I especially don't like the looks of $K$-algebra and vector space over~$K$. The extra space between the K and the dot is especially annoying to look at. The only automatic solution I have found so far is to make a callback in LuaTeX to test for specific combinations. – Gaussler Jun 1 '16 at 18:18
• @Gaussler TeX doesn't provide anything for this. – egreg Jun 1 '16 at 18:24
This answer presents two other ideas:
## Protruding value of package microtype
The TeX engine pdfTeX supports a feature called "character protrusion". Certain characters (e.g. -, ., ,) are allowed to move into the margins. This can improve the visual smoothness of the margins.
Macro \leftprotrude grabs the next character and looks at its value for the protrusion into the left margin. \lpcode<font><character slot> expands to an integer number, the unit is a per mill of 1em.
Package microtype enables character protrusion and configures some values for the protrusion (they are font-dependent).
## Vertical position of the star
For fun, the example also shows some alternatives:
• Text mode: \textsuperscript{*}
• Math mode: star set as limit of an operator (\mathop).
## Example
\documentclass{article}
\usepackage{microtype}
\newcommand*{\leftprotrude}[1]{%
\begingroup
\leavevmode
\kern-.001\dimexpr\numexpr(\lpcode\font#1)em\relax
\endgroup
#1%
}
\newcommand*{\CalgA}[1]{%
\mbox{$\mathrm{C}^{*}$#1-algebra}%
}
\newcommand*{\CalgB}[1]{%
\mbox{$\mathop{\mathrm{C}}\nolimits^{*}$#1-algebra}%
}
\newcommand*{\CalgC}[1]{%
\mbox{C\textsuperscript{*}#1-algebra}%
}
\begin{document}
\begin{tabular}{l@{}|l@{}|l@{}|l}
\CalgA{} & \CalgB{} & \CalgC{} & \small(unmodified) \\
\CalgA{\leftprotrude} & \CalgB{\leftprotrude} & \CalgC{\leftprotrude}
& \small\verb|\leftprotrude| \\
\CalgA{\negthinspace} & \CalgB{\negthinspace} & \CalgC{\negthinspace}
& \small\verb|\negthinspace| \\
\end{tabular}
\end{document}
| 2020-04-04 13:30:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8293650150299072, "perplexity": 3677.016824165138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521876.48/warc/CC-MAIN-20200404103932-20200404133932-00276.warc.gz"}
http://ex-ample.blogspot.com/2008/07/mesh-lab-alignment-registration.html | ## Thursday, July 24, 2008
### mesh lab alignment registration
First of all, you should have a good understanding of the layers mechanism in MeshLab and the fact that each mesh can have a transformation matrix. The alignment process simply modifies the transformation of each layer.
The main idea is that you iteratively glue your misaligned meshes onto the already aligned ones. A mesh that is aligned together with a set of already aligned meshes is said to be Glued (a * is shown near its name). Initially all the meshes are 'unglued'. Your task is to roughly align all your meshes.
Once the meshes are glued in a rather good initial position you can start the Alignment Process: the system chooses which meshes have some overlapping part, and for each pair of meshes the system starts an ICP alignment algorithm that precisely aligns the chosen pair. At the end of the process all the glued meshes will hopefully be aligned together.
Key Concepts:
• ICP: Iterative Closest Point. The basic algorithm that automatically and precisely aligns a moving mesh M onto a fixed one F. The main idea is that we choose a set of (well distributed) points over M and we search on F for the corresponding nearest points. These pairs are used to find the best rigid transformation that brings the points of M onto their correspondents on F. ICP has a lot of tunable parameters.
• Global Alignment: also known as multiview registration. A final step that evenly distributes the alignment error among all the alignments in order to avoid the biased accumulation of error.
• Absolute Scale: the parameters of the alignment tool are in absolute units, and the defaults are fine for a standard scanner outputting meshes in millimeter units. So, for example, the target error (i.e. the error that the ICP tries to achieve) is 0.05 mm, something that can be achieved with a good scanner. Obviously, if your range maps are in a different unit (microns, kilometers, ...) you have to adjust the default alignment parameters, otherwise the alignment process will fail.
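The heart of each ICP iteration, finding the best rigid transformation for a set of point pairs, has a closed form. Here is a toy 2D sketch in Python: it illustrates the idea only, it is not MeshLab's implementation, and all names are mine.

```python
import math

def best_rigid_transform_2d(moving, fixed):
    """Least-squares rotation + translation mapping paired points of the
    moving mesh onto the fixed one (one ICP iteration, 2D case)."""
    n = len(moving)
    mcx = sum(p[0] for p in moving) / n
    mcy = sum(p[1] for p in moving) / n
    fcx = sum(p[0] for p in fixed) / n
    fcy = sum(p[1] for p in fixed) / n
    # Optimal rotation angle from the centred point pairs.
    s_cos = sum((mx - mcx) * (fx - fcx) + (my - mcy) * (fy - fcy)
                for (mx, my), (fx, fy) in zip(moving, fixed))
    s_sin = sum((mx - mcx) * (fy - fcy) - (my - mcy) * (fx - fcx)
                for (mx, my), (fx, fy) in zip(moving, fixed))
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    # Translation carries the rotated moving centroid onto the fixed one.
    tx = fcx - (c * mcx - s * mcy)
    ty = fcy - (s * mcx + c * mcy)
    return theta, (tx, ty)

def apply_rt(theta, t, p):
    """Apply rotation theta and translation t to point p."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

# Demo: recover the alignment of a rotated/shifted copy of four points.
fixed = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
moving = [apply_rt(-0.5, (2.0, 1.0), p) for p in fixed]
theta, t = best_rigid_transform_2d(moving, fixed)
```

In real ICP the pairs come from a nearest-neighbour search on F, and the solve/re-pair loop repeats until the residual drops below the target error.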
Ref.
http://meshlab.sourceforge.net/wiki/index.php/Alignment | 2017-12-15 14:04:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7732937335968018, "perplexity": 1444.3939655133502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948572676.65/warc/CC-MAIN-20171215133912-20171215155912-00691.warc.gz"} |
http://www.blogabout.cloud/page/8/ | ## Is Get-ADUser a bit slow in getting required result? Hello ADSISearcher using PowerShell.
Sometimes Get-ADUser just isn't enough if you are working with thousands upon thousands of AD objects. In a recent scenario, while resolving an Active Directory health issue, I needed to be able to compare AD objects across two Active Directory domains within a resource forest.
ADSISearcher is a command-line-driven LDAP lookup that queries Active Directory directly. Because it works at the LDAP level, it enables much faster discovery of the required AD objects.
#### My scenario
I need to ensure CustomAttribute10 in Child1.domain.com matches CustomAttribute10 in Child2.domain.com. Yes, I could use Get-ADUser | Export-Csv, but this proved to take too long, and I needed a faster solution.
ADSISearcher proved to reduce the time required to execute this script, and dumping out to a transcript file with "," separating the text allows the information to be imported into Excel if required.
#### The script
Clear-Host
Write-Host "You are currently running Version 1.0" -BackgroundColor DarkGray
[string]$Menu = @'
┌─────────────────────────────────────────────────────────────┐
  ADSISearcher for CustomAttribute10
  Created by @thewatchernode
└─────────────────────────────────────────────────────────────┘
'@
$Menu

# Transcript
Start-Transcript -Path "$env:userprofile\Desktop\Child1vsChild2.txt"

# Start Time
$start = [datetime]::Now

#region Child Domain 1 Array
$Child1LDAPFilter = '(objectclass=user)'
$PageSize = 1000
$Child1DN = 'DC=child1,DC=domain,DC=com'
$Child1SB = 'DC=child1,DC=domain,DC=com'
$Child1Searcher = [ADSISearcher]('{0}' -f $Child1LDAPFilter)
$Child1Searcher.SearchRoot = [ADSI]('GC://{0}' -f $Child1SB)
$Child1Searcher.SearchRoot = [ADSI]('GC://{0}' -f $Child1DN)
$Child1Searcher.PageSize = $PageSize
$Child1Objects = $Child1Searcher.FindAll()
#endregion

#region Child Domain 2 Array
$Child2SB = 'DC=child2,DC=domain,DC=com'
$Child2DN = 'DC=child2,DC=domain,DC=com'
#endregion

#region Child Domain 1 vs Child Domain 2
foreach ($Object in $Child1Objects) {
    $childca10 = $Object.Properties.'customattribute10'
    # Combine both conditions in a single AND filter (a comma-separated form is not valid LDAP)
    $Child2LDAPFilter = "(&(objectclass=user)(customattribute10=$childca10))"
    $Child2Searcher1 = [ADSISearcher]('{0}' -f $Child2LDAPFilter)
    $Child2Searcher1.SearchRoot = [ADSI]('GC://{0}' -f $Child2SB)
    $Child2Searcher1.SearchRoot = [ADSI]('GC://{0}' -f $Child2DN)
    $Child2Searcher1.PageSize = $PageSize

    if ($null -eq $childca10) {
        Write-Host 'INFO, Null Value Found in Child Domain 1,' $Object.Properties.samaccountname -BackgroundColor Red
    }
    else {
        try {
            # FindOne() returns the first match in Child Domain 2 (or $null)
            if ($Child2Searcher1.FindOne()) {
                Write-Host 'Skipping, Attribute match found in Child domain 2 using Child domain 1,' $Object.Properties.samaccountname -ForegroundColor Green
            }
            else {
                Write-Host 'INFO, No Attribute match found in Child domain 2 using Child domain 1,' $Object.Properties.samaccountname -BackgroundColor Red
            }
        }
        catch {
            Write-Host 'INFO, No Attribute match found in Child domain 2 using Child domain 1,' $Object.Properties.samaccountname -BackgroundColor Red
        }
    }
}
#endregion

# Stop Transcript
Stop-Transcript

# End Time
$end = [datetime]::Now
$resultTime = $end - $start
Write-Host ('Execution : {0}Days:{1}Hr:{2}Min:{3}Sec' -f $resultTime.Days, $resultTime.Hours, $resultTime.Minutes, $resultTime.Seconds)

#### Download Get-ADSISearcher

Regards

The Author – Blogabout.Cloud

## QuickTip: PowerShell scripting – How long did it take to run the script?

Have you ever wondered how long it took to run your script? Well, you don't need to wonder anymore. The following couple of lines will provide a visual output of how long it takes to execute your script from start to finish.

$Start = [system.datetime]::Now
{
    # Script run....
}
$End = [system.datetime]::Now
$resulttime = $End - $Start
Write-Host ('Execution Time : {0}Days:{1}Hours:{2}Minutes:{3}Seconds' -f $resulttime.Days, $resulttime.Hours, $resulttime.Minutes, $resulttime.Seconds)
Regards
## Discovering Distribution Lists using PowerShell
Do you have a requirement to understand how many Distribution Lists exist within your Exchange organization, or need to understand whether they are actually being utilized? Well, this is something I came across recently while working for a customer. They had a mass of distribution lists across their organization which they were trying to tidy up before migrating to Office 365. The organisation had over 100,000 distribution lists, but their state was unknown, so what challenges did I face?
#### The challenges faced
• Unknown the number of DLs that had 0 members
• Unknown the number of DLs that had 0 managers
• Unknown the number of DLs that had invalid characters
#### The solution… PowerShell
So the following script was created to obtain all the attributes listed below, which enabled me to put together a business case for which distribution lists should be deleted and which should be migrated.
• Distribution List Name
• SamAccountName
• GroupType
• DistinguishedName
• Managedby
• memberdepartrestriction
• memberjoinrestriction
• Number of Members
[CmdletBinding()]
param()

# Call Distribution Lists
$dist = @(Get-DistributionGroup -ResultSize Unlimited)

# Start Transcript
Start-Transcript -Path $env:USERPROFILE\desktop\transcript.txt

# Report on Distribution Lists
foreach ($dl in $dist)
{
    # Count the members of the list (Get-DistributionGroupMember, not Get-DistributionGroup, returns the membership)
    $count = @(Get-DistributionGroupMember $dl.samaccountname).count
    $report = New-Object -TypeName PSObject
    $report | Add-Member -MemberType NoteProperty -Name 'Group Name' -Value $dl.Name
    $report | Add-Member -MemberType NoteProperty -Name 'samAccountName' -Value $dl.samaccountname
    $report | Add-Member -MemberType NoteProperty -Name 'Group Type' -Value $dl.grouptype
    $report | Add-Member -MemberType NoteProperty -Name 'DN' -Value $dl.distinguishedName
    $report | Add-Member -MemberType NoteProperty -Name 'Manager' -Value $dl.managedby
    $report | Add-Member -MemberType NoteProperty -Name 'Member Depart Restriction' -Value $dl.memberdepartrestriction
    $report | Add-Member -MemberType NoteProperty -Name 'Member Join Restriction' -Value $dl.memberjoinrestriction
    $report | Add-Member -MemberType NoteProperty -Name 'PrimarySMTPAddress' -Value $dl.primarysmtpaddress
    $report | Add-Member -MemberType NoteProperty -Name 'Number of Members' -Value $count
    Write-Host ('INFO: {0} has {1} members' -f $dl.name, $count)
    $reportoutput += $report
}

# Stop Transcript
Stop-Transcript

# Report
$reportoutput | Export-Csv -Path $env:USERPROFILE\desktop\DistributionListReport.csv -NoTypeInformation -Encoding UTF8

Regards

The Author – Blogabout.Cloud

## Goodbye OneNote 2016 from Office Portal

Back in September 2018, Microsoft announced it would be removing OneNote from its Office installation, with OneNote for Windows 10 becoming the default going forward. Microsoft has now announced (12th Feb) that OneNote 2016 will be removed from the Office Portal for installation using the Semi-Annual channel. So no installation from this point forward will include OneNote 2016 by default when a user on the Semi-Annual channel installs Office 365 on Windows 10 from the Office Portal.

## So what now?

OneNote 2016 is available to download from the following url; it is important to note that Microsoft are no longer developing new features for OneNote 2016.
If you want to take advantage of the latest that OneNote has to offer, Microsoft state you should consider switching to OneNote for Windows 10.

Regards

The Author – Blogabout.Cloud

## Big News: Microsoft Teams being rolled out with Office 365 ProPlus (CDN)

Finally, Microsoft Teams will be introduced into Office 365 ProPlus. Microsoft Teams will be introduced into the:

• February Monthly Channel;
• March Semi-Annual Channel Targeted (SAC-T); and
• July Semi-Annual Channel (SAC)

and Teams will be installed automatically when Office 365 ProPlus is installed on new PCs and Macs. Now the million dollar question: how is it being introduced? As a Consultant that has delivered and spoken about Office ProPlus for a number of years, I do have concerns about how it's going to be introduced, and here's why. Office 365 uses the Content Delivery Network (CDN) for providing updates to all the Office ProPlus products; this is not the case with Microsoft Teams. The update mechanism is completely different, as the client is delivered by the good old MSI, so this will bring a number of questions and challenges, to start:

• What version of Office am I running?
• What version of Teams am I running?
• Does Teams need updating?
• I have this weird problem but my colleague doesn't: is it version related?
• etc.. etc...

However, if they integrate Microsoft Teams into the CDN it is definitely the way forward, and it also allows the customer to exclude Teams in the configuration.xml (if this is a requirement). It is very early days and I am sure more information will be released in due course, but until then I am looking forward to seeing what the future holds, as Microsoft have stated "Teams will automatically be installed for users who already have Office 365 ProPlus in the future." So if you're not using Microsoft Teams today, Microsoft are making damn sure it's available to increase adoption.
The Author – Blogabout.Cloud

## MS-200: Planning and Configuring a Messaging Platform – Study Guide

Planning on taking the MS-200 Exam but don't know where to start with your studying? Well, do not fear, I am in the same boat and looking for the best way to study the required elements to pass MS-200. I have started building a list of all the elements which might be covered in the exam and will continue to update this page until all the things we need to know are covered. If you have any suggestions, please leave a comment below.

#### Manage Modern Messaging Infrastructure (45-50%)

#### Manage Mail Flow Topology (35-40%)

#### Manage Recipient and Devices (15-20%)

Regards

The Author – Blogabout.Cloud

## Merging Excel files using PowerShell, yes it can be done.

Have you ever worked with Excel files where you wanted to match and compare Columns/Rows? In the past, this has been quite a difficult task to achieve using the native commands within PowerShell. So have you heard of the PowerShell module ImportExcel? It is a PowerShell module that is available on the PowerShell Gallery and introduces a number of functions that allow you to work with Excel files using the good old blue background. From this module we will be working with the following function:

• Merge-Worksheet

Syntax

Merge-Worksheet [-Referencefile] [-Differencefile] [[-WorksheetName] ] [-Startrow ] -Headername [[-OutputFile] ] [[-OutputSheetName] ] [-Property ] [-ExcludeProperty ] [-Key ] [-KeyFontColor ] [-ChangeBackgroundColor ] [-DeleteBackgroundColor ] [-AddBackgroundColor ] [-HideEqual] [-Passthru] [-Show] [-WhatIf] [-Confirm] []

## Example usage of Function

The below shows the Reference and Difference Excel files that are being used in this example. I am going to merge the two Excel files based on Column A, the EmployeeNumber. During my testing, I have had issues using the -HeaderName parameter. In this post I will not be specifying the headings and will just modify the output file.
# Variables
$ref = "$env:USERPROFILE\desktop\test\ref.xlsx"
$dif = "$env:USERPROFILE\desktop\test\dif.xlsx"
$out = "$env:USERPROFILE\desktop\test\out.xlsx"

# Script Block
Merge-Worksheet -Referencefile $ref -Differencefile $dif -OutputFile $out -WorksheetName Sheet1 -Startrow 1 -OutputSheetName Sheet1 -NoHeader
As we can see from below, the output file has organised Column A and aligned the rows.

Very useful if you are working with Excel files; the only annoying thing is the -HeaderName parameter not working.
Regards
## Counting Exchange/Exchange Online Mailboxes with a specified SMTP Domain
When working with large organisations that have multiple SMTP Domains, you may run into a requirement where you need to know how many mailboxes have blogabout.cloud as their PrimarySMTPAddress, or have blogabout.cloud listed as one of their EmailAddresses.
Using the below PowerShell snippet you can find out exactly
# Primary SMTP Address
Get-Mailbox -ResultSize Unlimited | Where-Object {$_.PrimarySmtpAddress -like "*@blogabout.cloud"} | Measure-Object

# Email Addresses
Get-Mailbox -ResultSize Unlimited | Where-Object {$_.EmailAddresses -like "*@blogabout.cloud"} | Measure-Object
Regards,
## Working with Active Directory Attributes with multi-values.
It is common for organisations to use or create Active Directory attributes that may contain multiple different values, and when trying to obtain the information using PowerShell you might receive:
Which isn't helpful to man or beast. However, I have recently been working with custom attributes, so it's time to share my experiences once again. In this post I will be working with information that is located within my personal lab, where I have customattribute10 defined with O365.
# Command
Get-ADUser -Properties * -Filter * | Select-Object samaccountname,customattribute10 | Export-Csv -Path $env:USERPROFILE\desktop\test1.csv

As you can see from the above, I am not receiving the desired output from Get-ADUser. So let's use a PowerShell expression that obtains the required information. Let's discuss the below expression in detail to explain what each part does:

@{name="customattribute10";expression={$_.customattribute10}}
The @ symbol indicates that the property you are retrieving is an array, which means it contains multiple values. You then give the property a name/label (you can name it anything you like); this will be the header of the column in the CSV file:
@{name="customattribute10";
Then you provide an expression; this is the script block where you tell the PowerShell cmdlet what you are trying to fetch. For example, we want to fetch the values of the customattribute10 attribute:
expression={$_.customattribute10}}

So, now that we understand the array required to pull the multi-values, let's execute the below command:

# Command

Get-ADUser -Filter * -Properties proxyaddresses,customattribute10 | select samaccountname, @{L='customAttribute10'; E={$_.customAttribute10}} | Export-Csv -Path $env:USERPROFILE\desktop\test.csv

Now executing this command you will receive the correct output from the attribute which you desired.

Regards

The Author – Blogabout.Cloud

## Working with Active Directory using Get-ADUser

When working with Active Directory users, sometimes it's a lot easier using PowerShell to obtain all the information you require from your environment. As a Consultant I have lost count how many times I've used PowerShell to get information out of Active Directory, and it's essential to your skill set. The most simple and effective way is by running the following command, as it will dump all Active Directory users and their properties to a CSV file located on your desktop:

# Command

Get-ADUser -Filter * -Properties * | Export-CSV $env:userprofile\desktop\ADExport.csv
or
# Command
Get-ADUser -Filter * | Export-CSV $env:userprofile\desktop\ADExport.csv

What if you only require bits of information? The below command only targets the Name and SamAccountName fields. Simple, right?

# Command

Get-ADUser -Filter * -Properties Name,SamAccountName | Export-CSV $env:userprofile\desktop\ADExport.csv
or
# Command
Get-ADUser -Filter * -Properties * | Select-Object -Property Name,SamAccountName | Export-CSV $env:userprofile\desktop\ADExport.csv
The possibilities are endless; you can call everything from the below table because it exists on the AD object by default. If you have used ExtensionAttributes or CustomAttributes, you can also call these by adding them to your filter.
https://alltopicall.com/tag/degenerate/

## Degenerate States in Quantum Mechanics
In his book on quantum mechanics in the chapter on perturbation theory Dirac says in a footnote:
A system with only one stationary state belonging to each energy-level is often called non-degenerate and one with two or more stationary states belonging to an energy-level is called degenerate, although these words are not very appropriate from the modern point of view.
1) Why did Dirac deem the terms (non-)degenerate inappropriate?
2) Why do we, with our even more modern point of view, still use them?
## Stability of a Degenerate Equilibrium Point in a Planar ODE
Consider the planar ODE
$$\dot x_1 = x_2$$

$$\dot x_2 = - x_1^2 - 2 x_1 - 1$$

Obviously, $$(x_1,x_2)=(-1,0)$$ is an equilibrium point. The Jacobian matrix at this point is

$$J = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$

Thus, linearization fails to determine the stability. How can we determine the stability of this equilibrium point?
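Not part of the original question, but a quick numerical experiment makes the behaviour visible. Note that $$H = \tfrac{1}{2}x_2^2 + \tfrac{1}{3}(x_1+1)^3$$ is conserved along trajectories. The sketch below (plain Python with a hand-rolled RK4 step; all names are my own) perturbs the equilibrium slightly and watches the trajectory leave a fixed neighbourhood:

```python
# Numerical sketch (my own, not from the question): integrate the system with
# RK4 and measure the distance from the equilibrium (-1, 0).
def f(s):
    x1, x2 = s
    return (x2, -x1 * x1 - 2 * x1 - 1)  # second component is -(x1 + 1)**2

def rk4_step(s, h):
    k1 = f(s)
    k2 = f((s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]))
    k3 = f((s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]))
    k4 = f((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def max_distance(x1_0, x2_0, t_end=200.0, h=1e-3):
    # Largest distance from (-1, 0) reached before t_end (capped at 10).
    s, d = (x1_0, x2_0), 0.0
    for _ in range(int(t_end / h)):
        s = rk4_step(s, h)
        d = max(d, ((s[0] + 1.0) ** 2 + s[1] ** 2) ** 0.5)
        if d > 10.0:  # left a fixed neighbourhood of the equilibrium
            break
    return d

print(max_distance(-1.01, 0.0))  # a 0.01 perturbation eventually escapes
```

Since $$\dot x_2 = -(x_1+1)^2 \le 0$$, any perturbation eventually drives $$x_2$$ negative and $$x_1$$ off to $$-\infty$$, which is what the simulation shows: the equilibrium is unstable even though the linearization is inconclusive.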
## Showing that a 2-form on an odd dimensional space is not degenerate
On an odd-dimensional space $$\mathbb R^{2n+1}$$ with coordinates $$x_1,\dots,x_n;y_1,\dots,y_n;t$$ consider the following 2-form:

$$d\omega=\sum dx_i \wedge dy_i-\omega \wedge dt$$

where $$\omega$$ is any 1-form on $$\mathbb R^{2n+1}$$.

How to show that $$d\omega$$ is non-degenerate?
## computation of an integral for 2nd order non degenerate perturbation theory
I am given that the potential of a diatomic molecule is equal to $$V(\rho)=-2V \left( \frac{1}{\rho^2}-\frac{1}{2 \rho^2} \right)$$ where $$\rho=r/a$$ is a dimensionless coordinate and $$r$$ is the separation distance between the two atoms. I found the first-order corrections without issue, but I am stuck on finding the second-order one. I know that
$$E_n^2=\sum_{m\neq n}\frac{|\langle\psi_m^0|H'|\psi_n^0\rangle|^2}{E_n^0-E_m^0}$$
For the given problem, I also found that the wavefunctions are $$\psi=e^{-x^2/2} H_{n}(x)$$ (Hermite polynomials). My problem is in computing the integral of the inner product, i.e., computing
$$\langle\psi_m^0|H'|\psi_n^0\rangle=\int_{-\infty}^{\infty} x^3 e^{-x^2} H_{n}(x)H_{m}(x)\,dx$$
I could apply integration by parts a bazillion times to obtain the answer, but it is far too tedious. From reading Griffiths, I know that there is a much simpler way to do this with Dirac notation and ladder operators, in conjunction with the usual ladder operator identities, but I am unsure how to go about this.
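For what it's worth, the ladder-operator route gives a selection rule before any integral is done: with $$x=(a+a^\dagger)/\sqrt{2}$$, the operator $$x^3$$ only connects states with $$|m-n|=1$$ or $$3$$, so all other matrix elements vanish. A quick numerical check of the integral (my own plain-Python sketch, not part of the question) confirms this:

```python
import math

def hermite(n, x):
    # Physicists' Hermite polynomials via the recurrence
    # H_0 = 1, H_1 = 2x, H_{k+1} = 2x H_k - 2k H_{k-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def matrix_element(m, n, a=8.0, steps=16000):
    # Trapezoidal approximation of the integral x^3 e^{-x^2} H_m H_n dx,
    # normalised so the wavefunctions are orthonormal.
    norm = math.sqrt(2**m * math.factorial(m) * 2**n * math.factorial(n) * math.pi)
    h = 2.0 * a / steps
    total = 0.0
    for i in range(steps + 1):
        x = -a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * x**3 * math.exp(-x * x) * hermite(m, x) * hermite(n, x)
    return total * h / norm

for m in range(5):
    print(m, round(matrix_element(m, 0), 6))
```

Only the $$m=1$$ and $$m=3$$ entries against $$n=0$$ survive, exactly as the ladder-operator counting predicts; parity kills the even-$$m$$ cases.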
## What is $\mathrm{O}_q/\mathrm{SO}_q$ if $q$ is a quadratic $\mathbb{Z}$-form which is degenerate?
Any binary quadratic $$\mathbb{Z}$$-form $$q$$ induces a symmetric bilinear form

$$B_q(u,v) = q(u+v) - q(u) - q(v) \quad \forall u,v \in \mathbb{Z}^2$$

and it is considered non-degenerate (over $$\mathbb{Z}$$) if its discriminant

$$\text{disc}(q) := \det(B_q(e_i,e_j)_{1 \leq i,j \leq 2})$$

where $$e_1 = (1,0)$$ and $$e_2 = (0,1)$$, is invertible in $$\mathbb{Z}$$, i.e., equals $$\pm 1$$: see (2.1) in: http://math.stanford.edu/~conrad/papers/redgpZsmf.pdf .

Suppose $$q$$ is degenerate, but still $$\text{disc}(q) \neq 0$$ (so it is non-degenerate over $$\mathbb{Q}$$). Then its special orthogonal group scheme $$SO_q$$, defined over $$\text{Spec}\,\mathbb{Z}$$, does not have to be smooth, but it is flat as $$\mathbb{Z}$$ is Dedekind (loc. sit., Definition 2.8 and right after), and it is closed in the full orthogonal group $$O_q$$, whence the quotient $$Q:=O_q/SO_q$$ is representable.

My question is: Is $$Q$$ a finite group of order $$2$$ over $$\text{Spec}\,\mathbb{Z}$$?

Apparently, when applied to any integral domain $$R$$ which is an extension of $$\mathbb{Z}$$, the elements of $$O_q(R)$$, in some matrix realization, must be of $$\det \pm 1$$, so we could think of $$Q$$ as a $$\mathbb{Z}$$-group of order $$2$$; but as a functor of points, $$O_q$$ can be applied to any $$\mathbb{Z}$$-algebra $$R$$, for which we may find elements of $$O_q(R)$$ which are not of $$\det = \pm 1$$.
For example, let $$q(x,y)=x^2+y^2$$. One can verify it is degenerate.

We get $$SO_q = \text{Spec}\,\mathbb{Z}[x,y]/(x^2+y^2-1)$$. Consider its matrix realization $$\left\{ A=\left( \begin{array}{cc} x & -y \\ y & x \end{array} \right): \det(A)=1 \right\}.$$

Then the component of $$\det = -1$$ elements in $$O_q$$ is obtained by $$\text{diag}(1,-1)\,SO_q$$.

So apparently, $$Q = \mu_2$$ (which, unlike the other order-$$2$$ group $$(\mathbb{Z}/2)_\mathbb{Z}$$, has a double point at the reduction at $$(2)$$, not two distinct ones).

So far everything is good. But $$A=\text{diag}(3,1)$$ belongs to $$O_q(R)$$ where $$R=\mathbb{Z}/8$$ (as $$A^T \cdot A = I_2$$ in $$R$$, where $$I_2$$ represents $$q$$), but $$\det(A) \neq \pm 1$$ in $$R$$!

Does it mean that $$O_q$$ has more than these two connected components?

I thought to avoid this problem by considering $$O_q$$ and $$SO_q$$ as flat sheaves (in the small site of flat extensions of $$\mathbb{Z}$$, since what I really care about is $$H^1_{\text{fppf}}(\mathbb{Z},O_q)$$), but we may still find extensions such as $$\mathbb{Z} \times \mathbb{Z}$$ containing a square root of unity other than $$-1$$?!
Thank you !
Rony
## Neutron star (or any degenerate matter) and length contraction
I know there are plenty of questions abound on the internet that go something like this:
“If an object were to move fast enough, would it collapse into a black hole”?
And the (correct) answer that is often given in response: “No, that would be an example of relativistic mass, and it would not match predictions/calculations.”
But I have to ask, just to “drive it home” intuitively: let’s say we had degenerate matter, let’s call it a neutron star. This star was moving horizontally relative to me (so not spinning). Even in that case, would I observe it to length contract or collapse into a black hole?
It would length contract, right?
https://rafreeman.com/zettelkasten/20200909132834.html

# Discrete Mathematics
Truth Value: Either true or false.
Statement: Collection of words with defined truth values. Not a question.
Open Sentence: Statement which is true or false based on inputs.
OR: $P \vee Q$
AND: $P \wedge Q$
NOT: $\sim P$
XOR: $P \oplus Q$
Theorem 1.22—$\vee$ and $\wedge$ are commutative: $P\vee Q\equiv Q\vee P$ and $P\wedge Q\equiv Q\wedge P$.
Chartrand, Gary; Zhang, Ping. Discrete Mathematics (Page 18). Waveland Pr Inc. Kindle Edition.
##### De Morgan's Laws
$\sim (P \vee Q) \equiv (\sim P) \wedge (\sim Q)$
$\sim (P \wedge Q) \equiv (\sim P) \vee (\sim Q)$
### Implication
If $P$ then $Q$: $P \Rightarrow Q$
Converse: $Q \Rightarrow P$
Contrapositive: $(\sim Q)\Rightarrow (\sim P)$
Theorem 1.48—An implication is equivalent to its contrapositive: $P \Rightarrow Q \equiv (\sim Q)\Rightarrow (\sim P)$
Theorem 1.49: $P\Rightarrow Q\equiv(\sim P)\vee Q$.
Theorem 1.50: $\sim(P\Rightarrow Q)\equiv P \wedge (\sim Q)$.
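All of these equivalences can be checked by brute force over every truth assignment. A small Python check (my addition, not part of the note); note in particular that $P \Rightarrow Q$ is equivalent to $(\sim P) \vee Q$, with a disjunction on the right:

```python
from itertools import product

IMPLIES = lambda p, q: (not p) or q

def equivalent(f, g, arity=2):
    # Two propositional formulas are equivalent iff they agree everywhere.
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=arity))

# De Morgan's laws
assert equivalent(lambda p, q: not (p or q),  lambda p, q: (not p) and (not q))
assert equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q))

# An implication is equivalent to its contrapositive
assert equivalent(lambda p, q: IMPLIES(p, q), lambda p, q: IMPLIES(not q, not p))

# P => Q  is  (~P) v Q  (a disjunction, not a conjunction)
assert equivalent(lambda p, q: IMPLIES(p, q), lambda p, q: (not p) or q)

# ~(P => Q)  is  P ^ (~Q)
assert equivalent(lambda p, q: not IMPLIES(p, q), lambda p, q: p and (not q))

print("all equivalences hold")
```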
https://www.eduzip.com/ask/question/finf-the-angle-if-its-supplement-is-three-times-of-its-complement-521192

Mathematics
# Find the angle, if its supplement is three times its complement.
45
##### SOLUTION
Let the required angle be $x.$
Its complement $=\left( 90-x \right)$ and its supplement $=\left( 180-x \right)$

Given that its supplement $=3$ times its complement,
$\Rightarrow \left( 180-x \right) =3\times \left( 90-x \right)$
$\Rightarrow 180-x=270-3x$
$\Rightarrow 3x-x=270-180$
$\Rightarrow 2x=90$
$\Rightarrow x={45}^{\circ}$
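The solution is easy to sanity-check by brute force (my addition, plain Python): the supplement-equals-three-times-complement condition picks out exactly one angle.

```python
# Brute-force check (not part of the original solution): scan all integer
# angles that have both a complement and a supplement.
solutions = [x for x in range(0, 91) if 180 - x == 3 * (90 - x)]
print(solutions)  # prints [45]
```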
http://jarrettmeyer.com/2016/03/28/an-ember-multiselect-checkbox | Last week, I needed to create a series of multi-select checkboxes for a project that I’m working on. The client side framework is EmberJS. There were a few open-source options out there. Unfortunately, they use the existing Ember-CLI. Our project, being a bit older and out of date, cannot use Ember CLI.
But, rolling your own isn’t so difficult. Let’s start with what we’re trying to accomplish. We want to turn a list of checked items into an array of strings.
// Our model has a property called "permissions". This property is an
// array of strings.
App.IndexRoute = Ember.Route.extend({
model: function () {
return {
/* snip */
permissions: []
/* snip */
};
}
});
// Our controller has an array of available options. As usual, the label
// is displayed on screen. The value is the string that is stored on the
// model.
App.IndexController = Ember.Controller.extend({
adminOptions: [{
label: 'Can create users?',
value: 'create_users'
}, {
label: 'Can disable users?',
value: 'disable_users'
}, {
label: 'Can edit users?',
value: 'edit_users'
}],
// This is used to display the selected permissions in the UI. It is
// not required as part of the solution.
selectedAsString: Ember.computed('model.permissions.[]', function () {
return JSON.stringify(this.get('model.permissions'));
})
});
Our component is written in two parts. The first part is the checkbox element itself. The second is the actual component.
// Each available option becomes an instance of a "MultiSelectCheckbox" object.
var MultiSelectCheckbox = Ember.Object.extend({
label: 'label',
value: 'value',
isChecked: false,
changeValue: function () { },
onIsCheckedChanged: Ember.observer('isChecked', function () {
var fn = (this.get('isChecked') === true) ? 'pushObject' : 'removeObject';
this.get('changeValue').call(this, fn, this.get('value'));
})
});
App.MultiSelectCheckboxesComponent = Ember.Component.extend({
labelProperty: 'label',
valueProperty: 'value',
// The list of available options.
options: [],
// The collection of selected options. This should be a property on
// a model. It should be a simple array of strings.
selected: [],
checkboxes: Ember.computed('options', function () {
var _this = this;
var labelProperty = this.get('labelProperty');
var valueProperty = this.get('valueProperty');
var selected = this.get('selected');
return this.get('options').map(function (opt) {
var label = opt[labelProperty];
var value = opt[valueProperty];
var isChecked = selected.contains(value);
return MultiSelectCheckbox.create({
label: label,
value: value,
isChecked: isChecked,
changeValue: function (fn, value) {
_this.get('selected')[fn](value);
}
});
});
})
});
Here is our (very simple) component template.
{{#each checkboxes as |checkbox|}}
<p>
<label>
{{input type='checkbox' checked=checkbox.isChecked}}
{{checkbox.label}}
</label>
</p>
{{/each}}
Finally, to make use of our component, write the following in your template.
{{multi-select-checkboxes options=adminOptions selected=model.permissions}}
That’s really all it takes. A fully working example of this code is available at JSBin.
https://www.zbmath.org/?q=an%3A1115.76003

# zbMATH — the first resource for mathematics
Microflows and nanoflows. Fundamentals and simulation. Foreword by Chih-Ming Ho. (English) Zbl 1115.76003
Interdisciplinary Applied Mathematics 29. New York, NY: Springer (ISBN 0-387-22197-2/hbk). xxi, 817 p. (2005).
The main differences between fluid mechanics at microscales and in the macrodomain can be broadly classified into four areas: $$1^\circ$$ noncontinuum effects, $$2^\circ$$ surface-dominated effects, $$3^\circ$$ low Reynolds number effects, $$4^\circ$$ multiscale and multiphysics effects. The monograph under review gives a systematic presentation of all questions connected with the fundamentals and simulation of microflows and nanoflows. The material is divided into three main categories: a) gas flows (chapters 2–6), b) liquid flows (chapters 7–13), c) simulation techniques (chapters 14–18).
Ch. 1 introduces many concepts and devices which are discussed in detail in the monograph. For historical reasons, Ch. 1 begins with some prototype Micro-Electro-Mechanical-Systems (MEMS) devices and discusses such fundamental concepts as breakdown of constitutive laws, new flow regimes, and modeling issues encountered in microfluid and nanofluid systems. Fluid-surface interactions for liquids are discussed, such as electrokinetic effects and wetting, which are important at very small scales. The question of full-system simulation of MEMS is stated, and the concept of macromodeling is introduced.
Ch. 2 presents the basic equations of fluid dynamics for both incompressible and compressible flows and discusses appropriate nondimensionalizations for low-speed and high-speed flows. Most of the flows encountered in microsystems are, in general, of low speed; however, micropropulsion applications may involve high-speed supersonic flows. Compressible Navier-Stokes equations are considered with a general boundary condition for velocity slip, which is applied to a regime corresponding to a second-order correction in Knudsen number.
In Ch. 3 shear-driven gas microflows are considered with the objective of modeling a certain class of flows arising in microsystems. In particular, shear-driven microflows are the flows between the rotor and base plate of a micromotor, and the flows between stationary and movable arms of a comb-drive mechanism. The authors concentrate on prototype flows such as linear Couette flow, and flow in shear-driven microcavities and grooved microchannels, in order to overcome the difficulties of flow physics for complex engineering geometries. At first, analytical and numerical results are presented for steady Couette flow in the slip flow regime. Then the development and validation of an empirical model for steady Couette flow are presented in the transition and free-molecular flow regimes. Simulation results and analysis for oscillatory shear-driven flows in the entire Knudsen regime are given. Flows in prototype complex geometries, such as the microcavity and grooved microchannel flows, are included.
In Ch. 4 models for pressure-driven gas flows in the slip, transition and free-molecular flow regimes are presented. The authors are interested in microchannel, pipe, and duct flows as having primary engineering importance, with analytical solutions afforded by their simple geometry. For the transition and free-molecular flow regimes a unified flow model is developed which can accurately predict the volumetric flowrate, velocity profile, and pressure distribution in the whole Knudsen regime for pipes and ducts, and also for the minimal Knudsen number.
In Ch. 5 heat transfer in gas microflows is considered. Thermal creep (transpiration) effects are analyzed, which are important for channels with tangential temperature gradients on their surfaces; in particular, a microchannel surface with a prescribed heat flux subjected to temperature variations along its surface is considered, together with results on thermal creep flows. Then other temperature-induced flows are studied, and the validity of the heat conduction equation is investigated in various limiting cases. Combined effects of thermal creep, heat conduction, and convection in pressure-, force-, and shear-driven channel flows are also investigated.
Ch. 6 is devoted to rarefied gas flows encountered in applications other than simple microchannels. First, lubrication theory is considered, with special attention to the slider bearing and squeezed film problems. Then follow the separated flows in internal and external geometries in the slip flow regime. Further, theoretical and numerical results for Stokes flow past a sphere are presented. The classical Stokes drag for external flows, including rarefaction effects in the slip flow regime, is reviewed, with the presentation of drag formulae for pressure-driven flows past a stationary sphere confined in a pipe. Their verifications are given in connection with numerical simulations in the slip flow regime, which show drastic variations in the drag coefficient as a function of Knudsen number and the cylinder/sphere blocking ratio. The limiting results are considered applicable to liquid flows past solid electrically neutral spheres. Recent findings on gas flows through microfilters are summarized, together with the investigation of high-speed rarefied flows in micronozzles, which are used for controlling the motion of microsatellites and nanosatellites.
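To make the rarefaction trend concrete: a classical empirical slip correction for Stokes drag on a sphere is the Cunningham factor, $$C(Kn) = 1 + Kn\,(1.257 + 0.4\,e^{-1.1/Kn})$$, which divides the no-slip drag $$6\pi\mu R U$$. This is my own illustrative sketch using that well-known correlation, not one of the specific drag formulae from the book:

```python
import math

def cunningham(kn):
    # Classical empirical slip-correction factor (Davies' coefficients);
    # shown only to illustrate the trend, not taken from the book under review.
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def stokes_drag(mu, radius, speed, kn=0.0):
    # Stokes drag 6*pi*mu*R*U on a sphere, reduced by slip when Kn > 0.
    base = 6.0 * math.pi * mu * radius * speed
    return base if kn == 0.0 else base / cunningham(kn)

# Drag relative to the no-slip value drops as the flow rarefies:
for kn in (0.01, 0.1, 1.0, 10.0):
    print(kn, round(1.0 / cunningham(kn), 4))
```

The drag falls monotonically with Knudsen number, consistent with the drastic drag variations the review mentions.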
Ch. 7 opens the second part, “Liquid flows”, of the monograph. Here the authors review and explore ideas for microflow control elements using electrokinetic flow control schemes, which do not require any moving components. Electroosmotic and electrophoretic transport is covered in detail for both steady and time-periodic flows, and dielectrophoresis is presented, allowing separation and detection of similar-size particles based on their polarizability.
Ch. 8 is devoted to surface tension-driven flows and capillary phenomena involving wetting and spreading of liquid thin films and droplets, relevant for modeling classical engineering applications such as coating and lubrication. For microscopic delivery on open surfaces, electrowetting and thermocapillarity, along with dielectrophoresis, have been employed to move continuous or discrete streams of fluid, for example droplets along specified paths on glass surfaces. A new method of actuation exploits optical beams and photoconductor materials in conjunction with electrowetting. Such electrically or lithographically defined paths can be reconfigured dynamically using electronically addressable arrays that respond to electric potential, temperature, or laser beams and control the direction, timing, and speed of fluid droplets. Here microfluidic transport mechanisms based on capillary phenomena are studied, taking advantage of the relative importance and sensitivity of surface tension at microscales. In particular, the authors study how temperature, electric potential, and light can affect the value and possibly the sign of surface tension.
In Ch. 9 the basic ideas of micromixers and chaotic advection are presented, and analytic solutions for prototypical problems are given. In microchannels the flow is laminar and steady, so mixing is controlled solely by the diffusivity coefficient of the medium, thus requiring excessive amounts of time for complete mixing. Examples of passive and active mixers which have been used in microfluidic applications are discussed. Some quantitative measures for characterizing mixing are provided, based on the concept of the Lyapunov exponent from chaos theory, as well as some convenient ways of computing it.
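As a concrete illustration of the Lyapunov-exponent computation mentioned here (my own minimal sketch, using the Chirikov standard map rather than an example from the book): follow two nearby trajectories and renormalize their separation each step; the average logarithmic stretching rate estimates the largest Lyapunov exponent, and a positive value signals chaotic advection.

```python
import math

def standard_map(x, p, K=6.0):
    # Chirikov standard map on the torus; strongly chaotic for large K.
    p_new = (p + K * math.sin(x)) % (2 * math.pi)
    x_new = (x + p_new) % (2 * math.pi)
    return x_new, p_new

def lyapunov_estimate(x0, p0, d0=1e-8, steps=2000, K=6.0):
    # Two nearby trajectories; renormalize the separation back to d0
    # each step and average the per-step logarithmic stretching.
    xa, pa = x0, p0
    xb, pb = x0 + d0, p0
    total = 0.0
    for _ in range(steps):
        xa, pa = standard_map(xa, pa, K)
        xb, pb = standard_map(xb, pb, K)
        dx = (xb - xa + math.pi) % (2 * math.pi) - math.pi  # shortest torus distance
        dp = (pb - pa + math.pi) % (2 * math.pi) - math.pi
        d = math.hypot(dx, dp)
        total += math.log(d / d0)
        xb = (xa + dx * d0 / d) % (2 * math.pi)
        pb = (pa + dp * d0 / d) % (2 * math.pi)
    return total / steps

print(round(lyapunov_estimate(1.0, 0.3), 3))  # positive => exponential stretching
```

Setting the kick strength K to zero recovers an integrable shear flow, for which the estimate drops to zero, which is exactly the diagnostic distinction such measures provide for mixers.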
Ch. 10 is devoted to simple liquids in nanochannels, described by standard Lennard-Jones potentials. A key difference between the simulation of fluid transport in confined nanochannels, where the critical channel dimension can be a few molecular diameters, and at macroscopic scales is that the well-established continuum theories based on Navier-Stokes equations may not be valid in confined nanochannels. Therefore atomistic-scale simulations, in which the fluid atoms are modeled explicitly or semi-explicitly and the motion of the fluid atoms is calculated directly, shed fundamental insights on fluid transport. Here density profiles, diffusion transport, and the validity of the Navier-Stokes equations are discussed for simple fluids in confined nanochannels. Finally, the slip conditions at solid-fluid interfaces are discussed, and experimental and computational results together with conceptual models of slip are presented. Also the lubrication problem first discussed in Ch. 6 is revisited, and the Reynolds-Vinogradova theory for hydrophobic surfaces is presented.
Water and its properties in various forms are among the most actively investigated subjects because of water's importance in nature. After introducing some definitions and atomistic models for water, the authors present in Ch. 11 the static and dynamic behavior of water in confined nanochannels. In Ch. 12 the fundamentals and simulation of electroosmotic flow in nanochannels are discussed. The significance of the finite size of ions and the discrete nature of solvent molecules are highlighted. A slip boundary condition which can be used in the hydrodynamic theory for nanochannel electroosmotic flows is presented. The physical mechanisms that lead to charge inversion and corresponding flow reversal phenomena in nanochannel electroosmotic flows are discussed.
The last Ch. 13 of the second part focuses on functional fluids and on functionalized devices, specifically on nanotubes. Here details of the physical mechanisms involved in self-assembly are presented, and examples of patterns are given which are formed using magnetic fields for magneto-rheological fluids and electrophoretic deposition for electro-rheological fluids. The authors give a brief introduction to carbon nanotubes and ion channels in biological membranes, and present results on electrolyte transport through carbon nanotubes together with concepts showing that the transport of electrolytes can be augmented by using functionalized nanotubes and electric fields.
Ch. 14 of the last part, “Simulation techniques”, contains three main numerical methodologies to analyze flows in microdomains: $1^\circ$ high-order finite element (spectral element) methods for the Navier-Stokes equations, with formulations for both incompressible and compressible flows in stationary and moving domains; $2^\circ$ meshless methods with random point distribution; $3^\circ$ the force coupling method for particulate microflows. These are three different classes of discretization. In Ch. 15 the theory and numerical methodologies are discussed for simulating gas flows at mesoscopic and atomistic levels. Here an overview of the Boltzmann equation is given, describing in some detail gas-surface interactions, with benchmark solutions for validation of numerical codes and macromodels. The main result relevant for bridging microdynamics and macrodynamics is the Boltzmann equation, which is discussed together with lattice Boltzmann methods as well as the $H$-theorem. In Ch. 16 the theory and numerical methodologies for simulating liquid flows are discussed at atomistic and mesoscopic levels. In Ch. 17 the authors turn to simulating full systems across heterogeneous domains, i.e. fluid, thermal, electrical, structural, chemical, etc. Several reduced-order modeling techniques for the analysis of microsystems are introduced, such as generalized Kirchhoff networks, black box models, and the Galerkin method. The advantages and limitations of various techniques are discussed. Ch. 18 considers some applications of these techniques to several examples in microflows. Here reduced-order modeling of squeezed-film damping is investigated by applying equivalent circuit, Galerkin, mixed-level, and black box models. A compact model for electrowetting is discussed. Some of the software packages available for reduced-order simulation are summarized.
The reviewed monograph is the first systematic fundamental presentation of the subject. It is suitable for graduate students and researchers in fluid mechanics, physics, and in electrical, mechanical, and chemical engineering.
##### MSC:
76-02 Research exposition (monographs, survey articles) pertaining to fluid mechanics
76Dxx Incompressible viscous fluids
76N15 Gas dynamics (general theory)
76Mxx Basic methods in fluid mechanics
76A02 Foundations of fluid mechanics
Full Text: | 2021-08-01 22:28:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5059917569160461, "perplexity": 1847.7608984443123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154277.15/warc/CC-MAIN-20210801221329-20210802011329-00487.warc.gz"} |
https://socratic.org/questions/how-do-you-find-the-slope-and-intercept-of-y-1-5x-5#245071 | # How do you find the slope and intercept of y=1/5x+5?
Mar 25, 2016
$m = \frac{1}{5} , \text{ y-intercept = 5 }$
#### Explanation:
The equation of a line in the form y = mx + c, where m represents the gradient (slope) and c the y-intercept, is useful in that the values of m and c can be extracted easily.
the equation $y = \frac{1}{5} x + 5 \text{ is in this form }$
$\Rightarrow \text{ slope " = 1/5 " and y-intercept } = 5$
Here is the graph.
graph{1/5x+5 [-20, 20, -10, 10]} | 2022-01-24 21:07:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8841364979743958, "perplexity": 773.9350319717456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304600.9/warc/CC-MAIN-20220124185733-20220124215733-00316.warc.gz"} |
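Reading off $m$ and $c$ can also be done programmatically; a minimal stdlib sketch (sampling the line at $x = 0$ and $x = 1$, an approach of my choosing rather than anything from the answer):

```python
def slope_and_intercept(line):
    """Recover m and c from a linear function y = m*x + c by sampling:
    c = y(0) and m = y(1) - y(0)."""
    c = line(0)
    m = line(1) - c
    return m, c

y = lambda x: (1 / 5) * x + 5   # the equation from the question
m, c = slope_and_intercept(y)
print(m, c)  # m ≈ 0.2 (i.e. 1/5), c = 5
```

This works for any function that is genuinely linear, since two sample points determine the line.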
https://www.hackmath.net/en/math-problem/889 | # Tower
How many m² of copper plate are needed to replace the roof of a tower of conical shape with diameter 24 m, if the angle at the vertex of the axial section is 144°?
Result
S = 476 m2
#### Solution:
$r = 24/2 = 12 \ \text{m}, \quad s = r / \sin(144^\circ / 2) = 12.62 \ \text{m}, \quad S = \pi r s = \pi \cdot 12 \cdot 12.62 = 476 \ \text{m}^2$
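The same computation, as a quick numerical check of the solution above (a sketch, with the given diameter and vertex angle hard-coded):

```python
import math

d = 24.0                 # diameter of the conical roof, m
half_vertex = 144.0 / 2  # half of the vertex angle of the axial section, degrees

r = d / 2                                     # base radius
s = r / math.sin(math.radians(half_vertex))   # slant height of the cone
S = math.pi * r * s                           # lateral (conical) surface area
print(round(S))  # 476
```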
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or suggestions for rephrasing the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Do you want to convert length units?
## Next similar math problems:
1. Reflector
A circular reflector throws a light cone with a vertex angle of 49° and sits on a 33 m high tower. The axis of the light beam makes an angle of 30° with the axis of the tower. What is the maximum length of the illuminated horizontal plane?
2. Equilateral triangle
How long should be the minimum radius of the circular plate to be cut equilateral triangle with side 19 cm from it?
3. 30-60-90
The longer leg of a 30°-60°-90° triangle measures 5. What is the length of the shorter leg?
4. SAS triangle
The triangle has two sides long 7 and 19 and included angle 36°. Calculate area of this triangle.
5. Flowerbed
Flowerbed has the shape of an isosceles obtuse triangle. Arm has a size 5.5 meters and an angle opposite to the base size is 94°. What is the distance from the base to opposite vertex?
6. Cable car 2
Cable car rises at an angle 41° and connects the upper and lower station with an altitude difference of 1175 m. How long is the track of cable car?
7. Center traverse
Is it true that the middle traverse bisects the triangle?
8. Trigonometry
Is the equality true?
9. Height 2
Calculate the height of the equilateral triangle with side 38.
10. Sines
In ▵ ABC, if sin(α)=0.5 and sin(β)=0.6 calculate sin(γ)
11. Reference angle
Find the reference angle of each angle:
12. The cable car
The cable car has a length of 3.5 kilometers and an angle of climb of 30 degrees. What is the altitude difference between the upper and lower stations?
13. Theorem prove
We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption did we start?
14. Cable car
Cable car rises at an angle 45° and connects the upper and lower station with an altitude difference of 744 m. How long is "endless" tow rope?
15. An angle
An angle x is opposite side AB which is 10, and side AC is 15 which is hypotenuse side in triangle ABC. Calculate angle x.
16. High wall
I have a wall 2 m high. I need a 15 degree angle (upward) to a second wall 4 meters away. How high must the second wall be?
17. Tree
How tall is the tree that is observed at a visual angle of 52°, if I stand 5 m from the tree and my eyes are two meters above the ground?
http://math.au.dk/aktuelt/aktiviteter/event/item/4643/ | # The Riesz–Thorin Theorem
Matthias Engelmann
Foredrag for studerende
Fredag, 7 december, 2012, at 14:30-15:30, in Aud. D3 (1531-215)
Abstrakt:
I am going to present the Riesz-Thorin theorem, a result which is often referred to as interpolation between $\mathrm{L}^p(\mathbb{R}^n)$ spaces. Let $T$ be a bounded linear map from $\mathrm{L}^{p_0}(\mathbb{R}^n)$ to $\mathrm{L}^{q_0}(\mathbb{R}^n)$ and from $\mathrm{L}^{p_1}(\mathbb{R}^n)$ to $\mathrm{L}^{q_1}(\mathbb{R}^n)$. Loosely speaking, the theorem asserts that the set of all pairs of indices $(1/p,1/q)$ for which $T$ is bounded is a convex set. More precisely, $T$ is a bounded map from $\mathrm{L}^{p_t}(\mathbb{R}^n)$ to $\mathrm{L}^{q_t}(\mathbb{R}^n)$, where $1/p_t = t/p_1 + (1-t)/p_0$, $1/q_t = t/q_1 + (1-t)/q_0$, and $t\in[0,1]$. The main ingredients of the proof are basic results from Banach space theory, integration theory, and the Hadamard three-line theorem. If time allows, I will provide some applications in functional analysis.
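The interpolated exponents $p_t$, $q_t$ are harmonic (not arithmetic) means of the endpoint exponents; a small sketch computing them (the sample values are mine, not from the abstract):

```python
def interpolated_exponent(p0, p1, t):
    """Riesz-Thorin interpolation of exponents: 1/p_t = t/p1 + (1-t)/p0."""
    return 1.0 / (t / p1 + (1.0 - t) / p0)

# Example: interpolating between p0 = 2 and p1 = 4 at t = 1/2
p_half = interpolated_exponent(2, 4, 0.5)
print(p_half)  # 8/3 ≈ 2.667, not the arithmetic midpoint 3
```

For an infinite endpoint exponent the corresponding term $t/p_1$ must be replaced by its limit, $0$.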
Kontaktperson: Søren Fuglede Jørgensen | 2019-10-14 21:25:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194412589073181, "perplexity": 214.43015136195166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655310.17/warc/CC-MAIN-20191014200522-20191014224022-00168.warc.gz"} |
https://math.stackexchange.com/questions/1399048/the-product-of-quotient-spaces-is-a-quotient-space | # The product of quotient spaces is a quotient space
Let $(X,\tau)$ be a topological space and let $\sim$ be an equivalence relation on $X$. Now define an equivalence relation $\approx$ on $X \times X$ by $[(x,y)]_\approx = [x]_\sim \times[y]_\sim$
Is it true that $X/\sim \times$ $X/\sim$ $\cong (X\times X)/\approx$ ?
$[x]_\sim \times [y]_\sim$ and $[(x,y)]_\approx$ are in bijection as sets. In one case you first take the projection to the cosets and then the product; in the other you first take the product and then project to the cosets defined by the above equivalence, independently on the two components.
It is straightforward that any open set in $\tau$ is mapped to the "same" set in the two spaces, which is how their topologies are induced; therefore they are homeomorphic.
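The set-level bijection can be checked on a finite toy instance; a sketch with $X = \{0,\dots,5\}$ and $x \sim y$ iff $x \equiv y \pmod 2$ (my example, not from the question):

```python
from itertools import product

X = range(6)

def cls(x):
    """The equivalence class [x]_~ under x ~ y iff x ≡ y (mod 2)."""
    return frozenset(y for y in X if y % 2 == x % 2)

# X/~ x X/~ : pairs of classes
prod_of_quotients = {(cls(x), cls(y)) for x in X for y in X}

# (X x X)/≈ : classes [(x,y)]_≈ = [x]_~ x [y]_~, stored as frozen sets of pairs
quot_of_product = {frozenset(product(cls(x), cls(y))) for x in X for y in X}

# The map ([x], [y]) -> [x] x [y] is a bijection between the two sets
bijection = {(a, b): frozenset(product(a, b)) for (a, b) in prod_of_quotients}
print(len(prod_of_quotients), len(quot_of_product))  # 4 4
```

Both quotient constructions yield the same four classes, and the canonical map between them is one-to-one and onto.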
https://www.maplesoft.com/support/help/Maple/view.aspx?path=plot3d | plot3d - Maple Programming Help
plot3d
three-dimensional plotting
Calling Sequence
plot3d(expr, x=a..b, y=c..d, opts)
plot3d(f, a..b, c..d, opts)
plot3d([exprf, exprg, exprh], s=a..b, t=c..d, opts)
plot3d([f, g, h], a..b, c..d, opts)
Parameters
expr - expression in x and y
f, g, h - procedures or operators
exprf, exprg, exprh - expressions in s and t
a, b - real constants, procedures, or expressions in y
c, d - real constants, procedures, or expressions in x
x, y, s, t - names
opts - (optional) equations of the form option=value where option is described in plot3d/option
Description
• The plot3d command computes the plot of a three-dimensional surface. The first two calling sequences describe surface plots in Cartesian coordinates, while the second two describe parametric surface plots.
Other plotting facilities include the plot command for 2-D plotting, the plots package for specialized plots and the plottools package for plotting objects.
For further resources for plotting, and a pictorial listing of the available types of plots, see the Plotting Guide. Note that this guide is only available in the Standard interface.
• Maple includes the Interactive Plot Builder, which provides a point-and-click interface to the plotting functionality including two and three-dimensional plots, animations, and interactive plots with sliders. To launch the Plot Builder, run the plots[interactive] command. You can also launch the Plot Builder in the Standard Worksheet from the Tools menu. Select Assistants, and then Plot Builder. For more information, see Using the Interactive Plot Builder.
• In the first calling sequence, plot3d(expr, x=a..b, y=c..d), the expression expr must be an expression in the names x and y. The range a..b must evaluate to real constants and the range c..d must either evaluate to real constants or be expressions in x. Alternatively, the range c..d must evaluate to real constants and the range a..b must either evaluate to real constants or be expressions in y. These specify the range over which expr is plotted.
• In the second calling sequence, plot3d(f, a..b, c..d), f must be a procedure or operator that takes two arguments. Operator notation must be used, that is, the procedure name is given without parameters specified, and the ranges must be given simply in the form a..b, rather than as an equation. At least one of the ranges must have arguments evaluating to real constants; the other range may have arguments evaluating to real constants or be procedures of one variable.
• A parametric surface can be defined by three expressions exprf, exprg, exprh in two variables. In the third calling sequence, plot3d([exprf, exprg, exprh], s=a..b, t=c..d), exprf, exprg, and exprh must be expressions in the names s and t. In the fourth calling sequence, plot3d([f, g, h], a..b, c..d), f, g, and h must be procedures or operators taking at most two arguments. As with the second calling sequence, operator notation must be used.
• With any of these calling sequences, the range arguments may be omitted. In that case, the plot3d command assumes default ranges of -10 to 10, or $-2\mathrm{\pi }$ to $2\mathrm{\pi }$ in the case where a trigonometric function is detected. The first argument f or expr can also be omitted or set to the empty list [], in which case an empty plot is created.
• Any additional arguments are interpreted as options, which are specified as equations of the form option = value. For example, the option grid = [m, n] where m and n are positive integers, specifies that the plot is to be constructed on an m by n grid at equally spaced points in the ranges a..b and c..d respectively. By default, a 49 by 49 grid is used and 2401 points are generated. Other options include specification of alternate coordinate systems and rendering styles. For more information, see plot3d/options.
• If the first argument in any of the calling sequences is a set or list of surfaces, then the surfaces are plotted together. If a list is provided, then particular option values can also be given as lists, with elements corresponding to elements of the list of surfaces. The options that can take lists as values are: color, coords, grid, linestyle, numpoints, shading, style, symbol, symbolsize, thickness, and transparency. A list of three algebraic expressions or procedures is always interpreted as a parametric plot. To specify a list of three distinct plots, the option plotlist=true (or simply plotlist) must be provided.
• Plots in alternative coordinate systems, such as spherical and cylindrical systems, can be generated by using the coords option. For more information, see the examples below or the plot3d/coords help page.
• There are several ways to color 3-D surfaces created by the plot3d command. See the plot/color, plot3d/colorfunc and plot/colorscheme help pages for more information.
• When plot3d evaluates its arguments, any errors generated during the evaluation are suppressed. A symptom that something has gone wrong with the evaluation of your expression is a resulting empty plot.
• Help pages describing plotting commands and interactive plotting features are written with the assumption that you are using the Standard Worksheet interface. If you are using a different interface, see plot/interface.
• An output device may be specified using the plotsetup command. See plot/device for a list of supported devices.
• The result of a call to plot3d is a PLOT3D data structure containing enough information to render the plot. The user can assign a PLOT3D value to a variable, save it in a file, then read it in for redisplay. For more information, see plot3d/structure.
• All plotted expressions are evaluated numerically, that is, as floating point expressions, rather than symbolically. For more information about the computational environment used by the plot3d function, see plot/computation.
Examples
Default ranges in 3-D plots
For trigonometric functions, a default range of $-2\mathrm{\pi }$ .. $2\mathrm{\pi }$ is used.
> $\mathrm{plot3d}\left(\mathrm{sin}\left(x\right)\mathrm{cos}\left(y\right)\right)$
Default ranges of -10..10 are used when the range arguments are not provided.
> $\mathrm{plot3d}\left(xy,y=0..1\right)$
Generating three-dimensional surfaces using expressions or procedures
When plotting an expression in two variables, the range for each variable must be provided in the form of x=a..b and y=c..d, where x and y are the variables used in the expression.
> $\mathrm{plot3d}\left(\mathrm{sin}\left(x+y\right),x=-1..1,y=-1..1\right)$
To plot a procedure that is a function of two variables (for example, binomial), give the procedure in operator notation (that is, without parameters). Also, the ranges must be given in the form a..b, not as equations.
> $\mathrm{plot3d}\left(\mathrm{binomial},0..5,0..5\right)$
The following example defines a functional operator of two variables and plots it.
> $f≔\left(x,y\right)↦{x}^{2}-{y}^{2}:$
> $\mathrm{plot3d}\left(f,-1..1,-1..1\right)$
Using variable expressions or procedures in a range
You can specify a variable expression in one or both endpoints of a range as long as the other range contains real constants for both its endpoints. In the following example, the endpoints for x are given as real constants while the endpoints for y are expressions in x.
> $\mathrm{plot3d}\left(\mathrm{sin}\left(xy\right),x=-\frac{\mathrm{\pi }}{2}..\frac{\mathrm{\pi }}{2},y=-x..x\right)$
Alternatively, the endpoints for y can be specified as real constants and the endpoints for x given as expressions in y.
> $\mathrm{plot3d}\left(\mathrm{sin}\left(xy\right),x=-y..y,y=-\frac{\mathrm{\pi }}{2}..\frac{\mathrm{\pi }}{2}\right)$
The following example uses one procedure for the surface to be plotted and another procedure in the range for the second variable.
> p:= proc(x,y) if x^2 < y then cos(x*y) else x*sin(x*y) end if end proc:
> h:= proc(x) x^2 end proc:
> $\mathrm{plot3d}\left(p,-2..2,-1..h\right)$
Drawing smoother curves with the grid option
The default 49 by 49 grid may be too coarse for a plot, especially if the surface changes rapidly over the plotting range.
> $\mathrm{plot3d}\left(\mathrm{sin}\left(x\right)+\frac{\mathrm{sin}\left(20y\right)}{4},x=-\mathrm{\pi }..\mathrm{\pi },y=-\mathrm{\pi }..\mathrm{\pi }\right)$
Use the grid = [m, n] option to specify a finer grid and show more detail in your plot.
> $\mathrm{plot3d}\left(\mathrm{sin}\left(x\right)+\frac{\mathrm{sin}\left(20y\right)}{4},x=-\mathrm{\pi }..\mathrm{\pi },y=-\mathrm{\pi }..\mathrm{\pi },\mathrm{grid}=\left[50,200\right]\right)$
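Outside Maple, the grid sampling that plot3d performs can be mimicked in a few lines; a rough stdlib-Python analogue of the grid = [50, 200] evaluation above (the sample_surface helper is my own naming, and the actual rendering step is omitted):

```python
import math

def sample_surface(f, xr, yr, grid):
    """Evaluate f on an m-by-n grid of equally spaced points over
    xr = (a, b) and yr = (c, d), mimicking plot3d's grid option."""
    m, n = grid
    xs = [xr[0] + i * (xr[1] - xr[0]) / (m - 1) for i in range(m)]
    ys = [yr[0] + j * (yr[1] - yr[0]) / (n - 1) for j in range(n)]
    return [[f(x, y) for y in ys] for x in xs]

surf = lambda x, y: math.sin(x) + math.sin(20 * y) / 4
z = sample_surface(surf, (-math.pi, math.pi), (-math.pi, math.pi), grid=(50, 200))
print(len(z), len(z[0]))  # 50 200
```

As in Maple, refining the grid along the rapidly oscillating $y$ direction is what captures the fine-scale detail.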
Specifying the surface color
The following command shows the default color for a three-dimensional surface.
> $\mathrm{plot3d}\left(x\mathrm{exp}\left(-{x}^{2}-{y}^{2}\right),x=-2..2,y=-2..2,\mathrm{grid}=\left[100,100\right]\right)$
To set the color, use the color=cname option, where cname is one of the predefined plot color names.
> $\mathrm{plot3d}\left(x\mathrm{exp}\left(-{x}^{2}-{y}^{2}\right),x=-2..2,y=-2..2,\mathrm{grid}=\left[100,100\right],\mathrm{color}="SkyBlue"\right)$
You can also set the color option to an expression or procedure, or use the colorscheme option to color by $z$ values.
> $\mathrm{plot3d}\left(x\mathrm{exp}\left(-{x}^{2}-{y}^{2}\right),x=-2..2,y=-2..2,\mathrm{color}=x\right)$
> $\mathrm{plot3d}\left(x\mathrm{exp}\left(-{x}^{2}-{y}^{2}\right),x=-2..2,y=-2..2,\mathrm{colorscheme}=\left["Blue","LimeGreen"\right]\right)$
> p:= proc(x,y) if x^2 < y then cos(x*y) else x*sin(x*y) end if end proc:
> h:= proc(x) x^2 end proc:
> $\mathrm{plot3d}\left(p,-2..2,-1..h,\mathrm{color}=h\right)$
Displaying multiple surfaces in one plot
Use sets or lists to display more than one surface in the same three-dimensional plot.
> $\mathrm{plot3d}\left(\left[\mathrm{sin}\left(xy\right),x+2y\right],x=-\mathrm{\pi }..\mathrm{\pi },y=-\mathrm{\pi }..\mathrm{\pi }\right)$
> $\mathrm{c1}≔\left[\mathrm{cos}\left(x\right)-2\mathrm{cos}\left(0.4y\right),\mathrm{sin}\left(x\right)-2\mathrm{sin}\left(0.4y\right),y\right]:$
> $\mathrm{c2}≔\left[\mathrm{cos}\left(x\right)+2\mathrm{cos}\left(0.4y\right),\mathrm{sin}\left(x\right)+2\mathrm{sin}\left(0.4y\right),y\right]:$
> $\mathrm{c3}≔\left[\mathrm{cos}\left(x\right)+2\mathrm{sin}\left(0.4y\right),\mathrm{sin}\left(x\right)-2\mathrm{cos}\left(0.4y\right),y\right]:$
> $\mathrm{c4}≔\left[\mathrm{cos}\left(x\right)-2\mathrm{sin}\left(0.4y\right),\mathrm{sin}\left(x\right)+2\mathrm{cos}\left(0.4y\right),y\right]:$
> $\mathrm{plot3d}\left(\left\{\mathrm{c1},\mathrm{c2},\mathrm{c3},\mathrm{c4}\right\},x=0..2\mathrm{\pi },y=0..10,\mathrm{grid}=\left[25,15\right],\mathrm{color}=\mathrm{sin}\left(x\right)\right)$
To specify a different color for each surface, set the color option to a list of colors or expressions.
> $\mathrm{plot3d}\left(\left[\mathrm{sin}\left(xy\right),x+2y\right],x=-\mathrm{\pi }..\mathrm{\pi },y=-\mathrm{\pi }..\mathrm{\pi },\mathrm{color}=\left["Navy",xy\right]\right)$
Plotting a parametric surface
If the first argument is a list of three surfaces, plot3d produces a plot of the parametric surface.
> $\mathrm{plot3d}\left(\left[x\mathrm{sin}\left(x\right)\mathrm{cos}\left(y\right),x\mathrm{cos}\left(x\right)\mathrm{cos}\left(y\right),x\mathrm{sin}\left(y\right)\right],x=0..2\mathrm{\pi },y=0..\mathrm{\pi }\right)$
To prevent a list of three surfaces from being interpreted as a parametric plot, either provide the plotlist=true option or use a set for the surfaces.
> $\mathrm{plot3d}\left(\left[x\mathrm{sin}\left(x\right)\mathrm{cos}\left(y\right),x\mathrm{cos}\left(x\right)\mathrm{cos}\left(y\right),x\mathrm{sin}\left(y\right)\right],x=0..2\mathrm{\pi },y=0..\mathrm{\pi },\mathrm{plotlist}=\mathrm{true}\right)$
> $\mathrm{plot3d}\left(\left\{x\mathrm{sin}\left(y\right),x\mathrm{cos}\left(x\right)\mathrm{cos}\left(y\right),x\mathrm{sin}\left(x\right)\mathrm{cos}\left(y\right)\right\},x=0..2\mathrm{\pi },y=0..\mathrm{\pi }\right)$
Using different coordinate systems
Use the coords option to specify a different coordinate system for your plot. See plot3d/coords for a list of the available coordinate systems and information on how they are interpreted by plot3d. The coords page gives a description for each of these coordinate systems.
The following three commands show plots using spherical and toroidal coordinates.
> $\mathrm{plot3d}\left({1.3}^{x}\mathrm{sin}\left(y\right),x=-1..2\mathrm{\pi },y=0..\mathrm{\pi },\mathrm{coords}=\mathrm{spherical}\right)$
> $\mathrm{plot3d}\left(\left[1,x,y\right],x=0..2\mathrm{\pi },y=0..2\mathrm{\pi },\mathrm{coords}=\mathrm{toroidal}\left(10\right),\mathrm{scaling}=\mathrm{constrained}\right)$
> $\mathrm{plot3d}\left(\left[1,x,y\right],x=0..2\mathrm{\pi },y=0..2\mathrm{\pi },\mathrm{coords}=\mathrm{toroidal}\left(10\right),\mathrm{scaling}=\mathrm{constrained},\mathrm{style}=\mathrm{contour}\right)$
The following command generates the plot of the Möbius strip from the Plotting Guide using cylindrical coordinates.
> $\mathrm{plot3d}\left(\left[4+x\mathrm{cos}\left(\frac{1}{2}y\right),y,x\mathrm{sin}\left(\frac{1}{2}y\right)\right],x=-\mathrm{\pi }..\mathrm{\pi },y=0..2\mathrm{\pi },\mathrm{coords}=\mathrm{cylindrical},\mathrm{style}=\mathrm{patchnogrid},\mathrm{grid}=\left[60,60\right],\mathrm{orientation}=\left[35,135\right],\mathrm{lightmodel}=\mathrm{light4},\mathrm{shading}=\mathrm{zhue},\mathrm{scaling}=\mathrm{constrained},\mathrm{transparency}=0.3\right)$
Drawing smoother edges with the adaptmesh option
Surfaces which are not defined over the entirety of a supplied rectangular domain can be drawn with smoother edges by supplying the adaptmesh option.
> $\mathrm{plot3d}\left(\mathrm{sqrt}\left(1-{x}^{2}-{y}^{2}\right)+\frac{\mathrm{sqrt}\left({\left(x-\frac{1}{3}\right)}^{2}+{\left(y-\frac{1}{2}\right)}^{2}-\frac{1}{4}\right)}{2},x=-1.1..1.1,y=-1.1..1.1,\mathrm{adaptmesh}\right)$
Generating empty plot
> $\mathrm{plot3d}\left(\mathrm{title}="An empty plot"\right)$
Compatibility
• The plot3d command was updated in Maple 2015. | 2020-06-01 05:49:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 37, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8100274205207825, "perplexity": 1702.5772957965874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347414057.54/warc/CC-MAIN-20200601040052-20200601070052-00586.warc.gz"} |
http://openstudy.com/updates/526f48a2e4b0e209601e5bc9 | Here's the question you clicked on:
## osanseviero: Is there a limit for this series? (10 months ago)
• This Question is Closed
1. osanseviero
$a_n = \frac{1}{2}\left( a_{n-1} + \frac{10}{a_{n-1}} \right) \quad \text{for } n \ge 2$
2. osanseviero
Can I get it with the things I have? | 2014-09-24 02:35:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7681908011436462, "perplexity": 4502.799913368655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657141651.17/warc/CC-MAIN-20140914011221-00325-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/mfc.2021014?viewType=html | # American Institute of Mathematical Sciences
doi: 10.3934/mfc.2021014
Online First
## Convex combination of data matrices: PCA perturbation bounds for multi-objective optimal design of mechanical metafilters
1 IMT School for Advanced Studies, AXES Research Unit, Piazza S. Francesco, 19, 55100 Lucca, Italy 2 University of Genoa, Department of Civil, Chemical and Environmental Engineering, Via Montallegro, 1, 16145 Genova, Italy
* Corresponding author: Giorgio Gnecco
Received: April 2021. Revised: July 2021. Early access: August 2021.
Fund Project: A. Bacigalupo and G. Gnecco are members of INdAM. The authors acknowledge financial support from INdAM-GNAMPA, from INdAM-GNFM (project Trade-off between Number of Examples and Precision in Variations of the Fixed-Effects Panel Data Model), from the Università Italo Francese (projects GALILEO 2019 no. G19-48 and GALILEO 2021 no. G21 89), from the Compagnia di San Paolo (project MINIERA no. I34I20000380007), and from the University of Trento (project UNMASKED 2020)
In the present study, matrix perturbation bounds on the eigenvalues and on the invariant subspaces found by principal component analysis are investigated, for the case in which the data matrix on which principal component analysis is performed is a convex combination of two data matrices. The application of the theoretical analysis to multi-objective optimization problems – e.g., those arising in the design of mechanical metamaterial filters – is also discussed, together with possible extensions.
Citation: Giorgio Gnecco, Andrea Bacigalupo. Convex combination of data matrices: PCA perturbation bounds for multi-objective optimal design of mechanical metafilters. Mathematical Foundations of Computing, doi: 10.3934/mfc.2021014
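The kind of result the abstract describes can be illustrated with the classical Weyl inequality for symmetric matrices — a generic sketch with invented random matrices, not the paper's actual Proposition 1: for each (sorted) eigenvalue index i, |λ_i(G(α)) − λ_i(G(0))| ≤ ‖G(α) − G(0)‖₂.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two symmetric "Gram-like" data matrices and their convex combination G(alpha).
# Sizes and entries are invented purely for illustration.
A = rng.normal(size=(6, 6)); G0 = A @ A.T
B = rng.normal(size=(6, 6)); G1 = B @ B.T

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    G = (1 - alpha) * G0 + alpha * G1
    # Weyl's inequality: sorted eigenvalues move at most by the spectral norm
    # of the perturbation G - G0.
    gap = np.abs(np.linalg.eigvalsh(G) - np.linalg.eigvalsh(G0)).max()
    bound = np.linalg.norm(G - G0, 2)
    assert gap <= bound + 1e-9
```

Because `eigvalsh` returns eigenvalues in ascending order for both matrices, the element-wise comparison matches the sorted-index form of the inequality.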
(a) Positive eigenvalues $\lambda_i({\bf{G}}(\alpha))$ (green curves, $i = 1,\ldots,5$), their best lower bounds derived from the first inequalities in Eqs. (1a) and (1b) in Proposition 1 (blue curves) with $K = 50$, and their best upper bounds derived from the same inequalities, still with $K = 50$ (red curves); (b) for $K = 1$, $i = 1$, and each $\alpha \in [0,1]$: $\sin(\theta_{1,{\rm min}}(\alpha))$ (green curve), and smallest upper bound on it, based on the second to last inequalities in Eqs. (11a) and (11b) in Proposition 2 (blue curve)
Figure 2. Beam lattice metamaterials with viscoelastic resonators and their reference periodic cell [19]
Floquet-Bloch spectrum maximizing a low-frequency band gap of a mechanical metamaterial filter: (a) $3$-dimensional representation; (b) projection of the spectrum onto a vertical plane
Floquet-Bloch spectrum maximizing a high-frequency pass band of a mechanical metamaterial filter: (a) $3$-dimensional representation; (b) projection of the spectrum onto a vertical plane
Floquet-Bloch spectrum maximizing a trade-off between a low-frequency band gap and a high-frequency pass band of a mechanical metamaterial filter: (a) $3$-dimensional representation; (b) projection of the spectrum onto a vertical plane
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-10-section-10-1-counting-10-1-assess-your-understanding-page-687/22 | ## College Algebra (10th Edition)
In order to count all the elements of the set that are in A, B, or C, we have to add up the numbers that make up the union of the three sets. This means: $n(A\cup B\cup C)=15+10+15+ n(A\cap B) +n(A\cap C) + n(B\cap C) + n(A\cap B\cap C)=52$
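The identity behind this count is inclusion–exclusion. In its general form the pairwise intersections enter with minus signs; the all-plus form above works only when the n(·∩·) terms denote disjoint Venn-diagram regions, which is presumably how the excerpt uses them. A quick check of the general identity on invented sets:

```python
# Verify the inclusion–exclusion identity on concrete sets (illustrative values,
# not the textbook's): |A∪B∪C| = |A|+|B|+|C| − |A∩B| − |A∩C| − |B∩C| + |A∩B∩C|.
A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
assert lhs == rhs == 7
```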
http://defanor.uberspace.net/notes/markup-languages.html | # Markup languages
There are plenty of markup languages around, and it is often not easy to pick one for the task at hand. I am going to put together a few observations here.
And since I'm interested in exporting documents into HTML and Texinfo, those aspects will be mentioned explicitly. I will sometimes assume that the quality of non-native export/conversion is rather low.
## 1 Languages
### 1.1 LaTeX
It is the most advanced language of those which I will mention here, and it is great in many aspects, so I will only list its cons.
Cons:
• Awkward (outdated?) language and syntax: though I don't have anything better in mind, of which I would be certain that it would be at least as handy, LaTeX still is awkward as a language.
• PDF/paper-oriented: there is plenty of quirks when trying to export relatively complicated LaTeX document into HTML. Also no native export into Texinfo.
• Complicated: though it's easy to learn the basics, its internals are a mystery to me. That probably won't stop many programmers, but then they could consider usage being complicated.
Use cases: it's great for complex documents, involving diagrams or mathematical formulæ, or for anything that could use templates, but could be excessive in other cases. "LaTeX is the de facto standard for the communication and publication of scientific documents."
### 1.2 Org-mode
That's what I'm using at the moment.
Pros:
• Easy to use: though has a lot of features, it's easy to learn basics, and then only those features which one would actually use.
• HTML export and publishing: very handy and nice.
Cons:
• There is Texinfo export, but it can't export a whole project, as it does with HTML.
• Emacs-based: though it is nice when updating documents alone, it's not that nice for shared documents.
• Awkward syntax for various blocks and properties: while it's nice and handy for basic features, advanced ones do seem awkward to me: long names, which consist of capital letters.
Use cases: all kinds of notes, static websites, probably basic info files.
### 1.3 Texinfo
Pros:
• Export: that's the primary way to create Info files, and GNU uses it for HTML manuals as well. Info, HTML, LaTeX, and a few other export formats are supported natively.
• A GNU project.
Cons:
• Syntax is not great, though better than some others.
Use cases: its primary purpose is to create technical manuals, and it seems to be good at that, so anything manual-alike is what it's good for.
### 1.4 Markdown
And its derivatives, e.g. "GitHub Flavored Markdown". Actually, there's not much to write here: it's simple, which is both good and bad.
Use cases: github, maybe some docstrings and inline code documentation. By itself, it's not much better than textual files, and doesn't even replace those, since it's harder to read as plain text. But HTML export (e.g., using Hakyll/Pandoc) is nice.
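Part of Markdown's appeal is how little machinery a renderer needs; a toy converter for a tiny subset can be sketched in a few lines (illustrative only — nowhere near CommonMark-compliant):

```python
import re

def md_to_html(text: str) -> str:
    """Toy converter for a tiny Markdown subset: ATX headings, bold, emphasis.

    Illustrative only; real renderers (Pandoc, cmark) handle far more."""
    out = []
    for line in text.splitlines():
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            line = f"<h{level}>{m.group(2)}</h{level}>"
        else:
            # Bold before emphasis, so ** is not eaten by the single-* rule.
            line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
            line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
            if line.strip():
                line = f"<p>{line}</p>"
        out.append(line)
    return "\n".join(out)

print(md_to_html("# Title\nSome **bold** and *emphasis*."))
```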
### 1.5 reStructuredText and Sphinx
Probably I shouldn't mix those together, but that's what I'm doing.
Pros:
• Export: export into both HTML and Texinfo works fine, and Sphinx also creates handy makefiles.
• Syntax: it's relatively nice once one gets used to it.
Cons:
• reStructuredText is not particularly intuitive.
• Python, including various Python errors on export: at least not PHP or JS, but still.
• "Fancy" HTML with ugly default theme: though it's nice that it can include MathJax or render formulæ in PNG, highlights code in many languages, and has search that doesn't require any active server-side scripts, it's not that nice for accessibility, is full of JS and depends on it, and basically not minimalistic – probably would look even worse in a few years.
Use cases: manuals, documentation.
### 1.6 Others
HTML — when it's used to support semantics and not bloating (i.e., never) — can also be considered a usable markup language.
PostScript is surprisingly readable and somewhat nice for a language that is usually used as a target to compile other languages into (perhaps LaTeX most of the time).
## 2 Conclusion
As usual, it's all about preferences, priorities, and tasks. Though it's tempting to pick a single language for everything, it's like with programming languages – there's just no existing solution that would be good for everything (and if one thinks that they've found one, that's probably a "golden hammer").
https://www.physicsforums.com/threads/a-hideous-linear-regression-confidence-set-question.391475/ | # A hideous Linear Regression/confidence set question
1. Apr 1, 2010
### Phillips101
Take the linear model Y=X*beta+e, where e~Nn(0, sigma^2 * I), and it has MLE beta.hat
First, find the distribution of (beta.hat-beta)' * X'*X * (beta.hat-beta), where t' is t transpose. I think I've done this. I think it's a sigma^2 chi-squared (n-p) distribution.
Next, Hence find a (1-a)-level confidence set for beta based on a root with an F distribution. I can't do this to save my life. I'm aware that an F distribution is the ratio of two chi-squareds, but where the hell I'm going to get another chi squared from I have no idea. Also, we're dealing in -vectors- and I don't know how,what,why any confidence set is going to be or even look like, and I've no idea how to even try to get one.
-Any- help would be appreciated. Thanks
2. Apr 2, 2010
Notice that
$$\frac{\hat{\beta}' X'X \hat{\beta}}{\sigma^2}$$
has a $$\chi^2$$ distribution. However, the variance is unknown, so you need to estimate it (with another expression from the regression). What would you use for the estimate, and what is its distribution?
3. Apr 3, 2010
### Phillips101
Use the MLE sigma2.hat=(1/n)*||Y-Xbeta.hat||^2 ? This is distributed as a chi-squared n-1 variable if I remember correctly...
4. Apr 3, 2010
### Phillips101
If that's correct, then the thing you posted is distributed as an F distribution, which is what I need? And would swapping beta.hat for beta.hat-beta make any difference to this?
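For reference, the standard construction the thread is circling around: for the full-rank model, (β̂−β)′X′X(β̂−β)/σ² ~ χ²_p (not χ²_{n−p}), RSS/σ² ~ χ²_{n−p}, and the two are independent, so F = [(β̂−β)′X′X(β̂−β)/p] / [RSS/(n−p)] ~ F_{p,n−p}. The unknown σ² cancels in the ratio, and {β : F(β) ≤ F_{p,n−p}(1−a)} is an ellipsoidal (1−a)-level confidence set centered at β̂. A numerical sketch with simulated data (all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 200, 3, 2.0          # invented sample size, dimension, noise level
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])  # "true" beta, known here only because we simulate
y = X @ beta + sigma * rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(np.sum((y - X @ beta_hat) ** 2))   # RSS / sigma^2 ~ chi^2_{n-p}
d = beta_hat - beta
quad = float(d @ (X.T @ X) @ d)                # quad / sigma^2 ~ chi^2_p, independent of RSS

# The pivot: sigma^2 cancels in the ratio, leaving an F_{p, n-p} distribution.
F = (quad / p) / (rss / (n - p))
print(F)
```

In practice β is unknown, so one inverts the pivot: the confidence set is all β for which the computed F statistic stays below the F_{p,n−p} quantile.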
https://enacademic.com/dic.nsf/enwiki/1046120/Actor_model_theory | Actor model theory
In theoretical computer science, Actor model theory concerns theoretical issues for the Actor model.
Actors are the primitives that form the basis of the Actor model of concurrent digital computation. In response to a message that it receives, an Actor can make local decisions, create more Actors, send more messages, and designate how to respond to the next message received. Actor model theory incorporates theories of the events and structures of Actor computations, their proof theory, and denotational models.
Events and their orderings
From the definition of an Actor, it can be seen that numerous events take place: local decisions, creating Actors, sending messages, receiving messages, and designating how to respond to the next message received.
However, this article focuses on just those events that are the arrival of a message sent to an Actor.
"Law of Countability": There are at most countably many events.
Activation ordering
The activation ordering (-≈→) is a fundamental ordering that models one event activating another (there must be energy flow in the message passing from an event to an event which it activates).
*Because of the transmission of energy, the activation ordering is "relativistically invariant"; that is, for all events e1, e2, if e1 -≈→ e2, then the time of e1 precedes the time of e2 in the relativistic frames of reference of all observers.
*"Law of Strict Causality for the Activation Ordering": For no event does e -≈→ e.
*"Law of Finite Predecession in the Activation Ordering": For all events e1 the set {e|e -≈→ e1} is finite.
Arrival orderings
The arrival ordering of an Actor x ( -x-> ) models the (total) ordering of events in which a message arrives at x. Arrival ordering is determined by "arbitration" in processing messages (often making use of a digital circuit called an arbiter). The arrival events of an Actor are on its world line. The arrival ordering means that the Actor model inherently has indeterminacy (see Indeterminacy in concurrent computation).
*Because all of the events of the arrival ordering of an actor x happen on the world line of x, the arrival ordering of an actor is "relativistically invariant". "I.e.", for all actors x and events e1, e2, if e1 -x→ e2, then the time of e1 precedes the time of e2 in the relativistic frames of reference of all observers.
*"Law of Finite Predecession in Arrival Orderings": For all events e1 and Actors x the set {e|e -x→ e1} is finite.
Combined ordering
The combined ordering (denoted by →) is defined to be the transitive closure of the activation ordering and the arrival orderings of all Actors.
*The combined ordering is relativistically invariant because it is the transitive closure of relativistically invariant orderings. "I.e.", for all events e1, e2, if e1→e2, then the time of e1 precedes the time of e2 in the relativistic frames of reference of all observers.
*"Law of Strict Causality for the Combined Ordering": For no event does e→e.
The combined ordering is obviously transitive by definition.
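These definitions can be sketched on a toy, finite event set (event names are invented): the combined ordering is computed as the transitive closure of activation and arrival edges, after which the set {e | e1→e→e2} from the Law of Discreteness can be read off directly.

```python
def transitive_closure(edges):
    """Strict reachability over a finite event set, given direct-ordering edges."""
    nodes = {n for edge in edges for n in edge}
    reach = {n: set() for n in nodes}
    for a, b in edges:
        reach[a].add(b)
    changed = True
    while changed:  # keep propagating until no new reachable events appear
        changed = False
        for a in nodes:
            new = set().union(*(reach[b] for b in reach[a])) if reach[a] else set()
            if not new <= reach[a]:
                reach[a] |= new
                changed = True
    return reach

def events_between(reach, e1, e2):
    """The set {e | e1 -> e -> e2} in the given ordering."""
    return {e for e in reach if e in reach[e1] and e2 in reach[e]}

# Activation edges (-≈→) and one actor's arrival edges (-x→); names are invented.
activation = [("start", "a1"), ("a1", "a2")]
arrival_x = [("a1", "a3"), ("a3", "a2")]
combined = transitive_closure(activation + arrival_x)

print(events_between(combined, "start", "a2"))  # a finite set, as the Law of Discreteness requires
```

On a finite event set discreteness holds trivially; the laws discussed below matter precisely because Actor computations may involve countably many events.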
In [Baker and Hewitt 197?] , it was conjectured that the above laws might entail the following law:
"Law of Finite Chains Between Events in the Combined Ordering": There are no infinite chains ("i.e.", linearly ordered sets) of events between two events in the combined ordering →.
Independence of the Law of Finite Chains Between Events in the Combined Ordering
However, [Clinger 1981] surprisingly proved that the Law of Finite Chains Between Events in the Combined Ordering is independent of the previous laws, "i.e.",
Theorem. "The Law of Finite Chains Between Events in the Combined Ordering does not follow from the previously stated laws."
Proof. It is sufficient to show that there is an Actor computation that satisfies the previously stated laws but violates the Law of Finite Chains Between Events in the Combined Ordering.
Consider a computation which begins when an actor Initial is sent a Start message, causing it to take the following actions:

1. Create a new actor Greeter_1, which is sent the message SayHelloTo with the address of Greeter_1
2. Send Initial the message Again with the address of Greeter_1

Thereafter the behavior of Initial on receipt of an Again message with address Greeter_i (which we will call the event Again_i) is as follows:

1. Create a new actor Greeter_i+1, which is sent the message SayHelloTo with the address of Greeter_i
2. Send Initial the message Again with the address of Greeter_i+1

Obviously the computation of Initial sending itself Again messages never terminates.

The behavior of each actor Greeter_i is as follows:

* When it receives a message SayHelloTo with the address of Greeter_i-1 (which we will call the event SayHelloTo_i), it sends a Hello message to Greeter_i-1
* When it receives a Hello message (which we will call the event Hello_i), it does nothing.

Now it is possible that Hello_i -Greeter_i→ SayHelloTo_i every time, and therefore Hello_i → SayHelloTo_i. Also Again_i -≈→ Again_i+1 every time, and therefore Again_i → Again_i+1.

Furthermore, all of the laws stated before the Law of Strict Causality for the Combined Ordering are satisfied. However, there may be an infinite number of events in the combined ordering between Again_1 and SayHelloTo_1, as follows:

Again_1 → … → Again_i → … ∞ … → Hello_i → SayHelloTo_i → … → Hello_1 → SayHelloTo_1
However, we know from physics that infinite energy cannot be expended along a finite trajectory (see for example Quantum information and relativity theory). Therefore, since the Actor model is based on physics, the Law of Finite Chains Between Events in the Combined Ordering was taken as an axiom of the Actor model.
Law of Discreteness
The Law of Finite Chains Between Events in the Combined Ordering is closely related to the following law:

"Law of Discreteness": For all events e1 and e2, the set {e|e1→e→e2} is finite.
In fact the previous two laws have been shown to be equivalent:
Theorem [Clinger 1981]. "The Law of Discreteness is equivalent to the Law of Finite Chains Between Events in the Combined Ordering" (without using the axiom of choice).
The law of discreteness rules out Zeno machines and is related to results on Petri nets [Best et al. 1984, 1987].
The Law of Discreteness implies the property of unbounded nondeterminism. The combined ordering is used by [Clinger 1981] in the construction of a denotational model of Actors (see denotational semantics).
Denotational semantics
Clinger [1981] used the Actor event model described above to construct a denotational model for Actors using power domains. Subsequently Hewitt [2006] augmented the diagrams with arrival times to construct a technically simpler denotational model that is easier to understand.
See also
*Actor model early history
*Actor model and process calculi
*Actor model implementation
References
*Carl Hewitt, et al. Actor Induction and Meta-evaluation Conference Record of ACM Symposium on Principles of Programming Languages, January 1974.
*Irene Greif. Semantics of Communicating Parallel Processes MIT EECS Doctoral Dissertation. August 1975.
*Edsger Dijkstra. A discipline of programming Prentice Hall. 1976.
*Carl Hewitt and Henry Baker Actors and Continuous Functionals Proceeding of IFIP Working Conference on Formal Description of Programming Concepts. August 1-5, 1977.
*Henry Baker and Carl Hewitt The Incremental Garbage Collection of Processes Proceeding of the Symposium on Artificial Intelligence Programming Languages. SIGPLAN Notices 12, August 1977.
*Carl Hewitt and Henry Baker Laws for Communicating Parallel Processes IFIP-77, August 1977.
*Aki Yonezawa Specification and Verification Techniques for Parallel Programs Based on Message Passing Semantics MIT EECS Doctoral Dissertation. December 1977.
*Peter Bishop Very Large Address Space Modularly Extensible Computer Systems MIT EECS Doctoral Dissertation. June 1977.
*Carl Hewitt. Viewing Control Structures as Patterns of Passing Messages Journal of Artificial Intelligence. June 1977.
*Henry Baker. Actor Systems for Real-Time Computation MIT EECS Doctoral Dissertation. January 1978.
*Carl Hewitt and Russ Atkinson. Specification and Proof Techniques for Serializers IEEE Journal on Software Engineering. January 1979.
*Carl Hewitt, Beppe Attardi, and Henry Lieberman. Delegation in Message Passing Proceedings of First International Conference on Distributed Systems Huntsville, AL. October 1979.
*Russ Atkinson. Automatic Verification of Serializers MIT Doctoral Dissertation. June, 1980.
*Bill Kornfeld and Carl Hewitt. The Scientific Community Metaphor IEEE Transactions on Systems, Man, and Cybernetics. January 1981.
*Gerry Barber. Reasoning about Change in Knowledgeable Office Systems MIT EECS Doctoral Dissertation. August 1981.
*Bill Kornfeld. Parallelism in Problem Solving MIT EECS Doctoral Dissertation. August 1981.
*Will Clinger. Foundations of Actor Semantics MIT Mathematics Doctoral Dissertation. June 1981.
*Eike Best. Concurrent Behaviour: Sequences, Processes and Axioms Lecture Notes in Computer Science Vol.197 1984.
*Gul Agha. [http://hdl.handle.net/1721.1/6952 Actors: A Model of Concurrent Computation in Distributed Systems] Doctoral Dissertation. 1986.
*Eike Best and R.Devillers. Sequential and Concurrent Behaviour in Petri Net Theory Theoretical Computer Science Vol.55/1. 1987.
*Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott. A Foundation for Actor Computation Journal of Functional Programming January 1993.
*Satoshi Matsuoka and Akinori Yonezawa. Analysis of inheritance anomaly in object-oriented concurrent programming languages in Research directions in concurrent object-oriented programming. 1993.
*Jayadev Misra. A Logic for concurrent programming: Safety Journal of Computer Software Engineering. 1995.
*Luca de Alfaro, Zohar Manna, Henry Sipma and Tomás Uribe. Visual Verification of Reactive Systems TACAS 1997.
*Thati, Prasanna, Carolyn Talcott, and Gul Agha. Techniques for Executing and Reasoning About Specification Diagrams International Conference on Algebraic Methodology and Software Technology (AMAST), 2004.
*Giuseppe Milicia and Vladimiro Sassone. The Inheritance Anomaly: Ten Years After Proceedings of the 2004 ACM Symposium on Applied Computing (SAC), Nicosia, Cyprus, March 14-17, 2004.
*Petrus Potgieter. [http://arxiv.org/abs/cs/0412022 Zeno machines and hypercomputation] 2005
*Carl Hewitt [http://www.pcs.usp.br/~coin-aamas06/10_commitment-43_16pages.pdf What is Commitment?Physical, Organizational, and Social] COINS@AAMAS. 2006.
Wikimedia Foundation. 2010.
https://www.rieti.go.jp/en/china/14050201.html | China in Transition
# Is “China as Number 1” Ultimately an Illusion? — China's GDP surpassing that of the United States is simply a matter of time
Chi Hung KWAN
Consulting Fellow, RIETI
In China as Number 1 (Toyo Keizai Inc.) published in 2009, I compared the strength of the Chinese economy with that of other economic powers such as the United States and Japan in terms of major indicators including gross domestic product (GDP), trade volume, foreign exchange reserves, and automobile and steel production, and predicted that China's GDP would overtake that of the United States to become the largest in the world by 2026. There were many objections to this view from the beginning, and, more recently, with the economic growth in China slowing significantly, the argument that “China as Number 1” will end up being an illusion has become even more prominent. However, given that China is still growing much faster than the major countries, the presence of China has continued to increase, and the era of “China as Number 1” is drawing closer and closer. Based on the criterion of whether China's GDP will eventually top that of the United States, the probability of “China as Number 1” ending up as an illusion is very low.
## Pessimism unfounded
Toshiya Tsugami, a consultant specializing in modern China and a former official of the Ministry of Economy, Trade and Industry (Director of the Northeast Asia Division, the Trade Policy Bureau), is the most active person in developing a pessimistic view on the future of the Chinese economy. In his book, Chugoku Taito no Shuen (End of China's Rise) (Nikkei Publishing Inc., 2013), Mr. Tsugami concluded that the day when China's GDP will overtake the United States and become the largest in the world will never come, as its growth rate from now on will be around 5% at best. As the basis for this argument, he cites the following short-, medium-, and long-term problems facing China.
First, as the short-term problem, public spending in response to the collapse of Lehman Brothers, including the economic stimulus measures that amounted to four trillion yuan, has created excessive production capacity, potentially leading to an increase in bad debt.
For the medium-term problem, upward pressure on wages is rising in China in the wake of the arrival of the “Lewisian turning point,” which signals the drying up of excess labor in rural areas. To solve this problem, China needs to enhance productivity through deregulation and privatization. In reality, however, the state-owned sector is expanding at the expense of private-sector companies, while the dual structure of urban and rural areas has prevented farmers from moving to cities and pushed labor costs even higher.
Finally, regarding the long-term problem, China has to face the problems associated with population aging and declining birth rate before it reaches the stage of developed countries.
It is true that these factors are restraining growth in the Chinese economy, but it is still too early to conclude that the rise of China will come to an end any time soon and that "China as Number 1" will end up being an illusion. With a per capita GDP of only $6,747 in 2013, much lower than those of developed nations such as the United States ($53,101) and Japan ($38,491) (both figures are based on data from the International Monetary Fund, World Economic Outlook Database, April 2014, which include estimates), China still enjoys the significant advantage of being a latecomer, which provides abundant room for industrial advances and technology transfers from overseas (note).

## Timing at which China's GDP will surpass that of the United States

China's GDP stood at $4.52 trillion in 2008, a size equal to a mere 30.7% of that of the United States and 93.2% of that of Japan. However, China surpassed Japan to become the second largest economy in the world after the United States in 2010 in terms of GDP, which reached $9.18 trillion in 2013, equivalent to 1.87 times that of Japan and 54.6% of that of the United States (Figure 1). This reflects the fact that the economic growth rate in China is still much higher than those of major countries such as the United States and Japan, although it has been declining in recent years (Figure 2). The appreciation of the yuan against the dollar and the yen has also contributed to the surge in China's GDP relative to the United States and Japan.
If we estimate the timing at which the GDP figures for the United States and China will trade places, based on those in 2013 and using three scenarios (optimistic, standard, and pessimistic) that take into account the growth rates of China and the United States and changes in the exchange rate of the yuan against the dollar (more precisely, the real exchange rate, which takes into consideration changes in the relative price level between the two countries), we project that China's GDP will surpass that of the United States in 2021 in the optimistic scenario, 2024 in the standard scenario, and 2077 in the pessimistic scenario (Figure 3).

| Period | Optimistic | Standard | Pessimistic |
|---|---|---|---|
| GDP growth, China, 2014-2020 | 8% | 7% | 6% |
| GDP growth, China, 2021-2030 | 6% | 5% | 4% |
| GDP growth, China, 2031- | 5% | 4% | 3% |
| GDP growth, U.S., 2014- | 2.5% | 2.5% | 2.5% |
| RMB's appreciation against the U.S. dollar in real terms, 2014- | 3% | 2% | 0% |

Source: Compiled by the author based on official statistics of the United States and China in 2013.

## Unshaken position of China as a global economic power

Even before China's GDP surpasses that of the United States, China's contribution to the growth rate of the world economy has already exceeded that of the United States, reaching 1.1% in 2013, which is equivalent to 36.7% of global growth (3.0%) (Figure 4). The contribution of a country to the growth rate of the world economy is calculated by multiplying the growth rate of the country by its share of total world GDP (based on purchasing power parity). Although China's growth rate has declined recently, its contribution to the global growth rate has remained high, thanks to its rising share of world GDP.

Looking at the issue at an industrial level, Chinese crude steel production reached 779 million tonnes in 2013, maintaining the leading position it has held since 1996 and leaving Japan, which has the second largest steel production (111 million tonnes), far behind (Table 1). In addition, auto production in China stood at 22.12 million units in 2013, twice that of the United States (11.05 million units) (Table 2).
Table 1: Crude steel production (million tonnes)

| | 2008 | 2013 |
|---|---|---|
| China | 550 | 779 |
| United States | 91 | 87 |
| Japan | 119 | 111 |

Source: Compiled by the author based on World Steel Association data.

Table 2: Automobile production (million units)

| | 2008 | 2013 |
|---|---|---|
| China | 9.35 | 22.12 |
| United States | 8.68 | 11.05 |
| Japan | 11.56 | 9.63 |

Source: Compiled by the author based on data from the China Association of Automobile Manufacturers for China and the automobile statistics monthly of the Japan Automobile Manufacturers Association for Japan and the United States.

In terms of trade, China is now the world's largest exporter and second largest importer after the United States. Totaling exports and imports, China became the world's largest trading power in 2013, overtaking the United States for the first time (Table 3).

Table 3: Top trading countries in 2013 ($ billion)

| Rank | Exports | | Imports | | Total exports and imports | |
|---|---|---|---|---|---|---|
| 1 | China | 2,210 | United States | 2,331 | China | 4,160 |
| 2 | United States | 1,579 | China | 1,950 | United States | 3,910 |
| 3 | Germany | 1,453 | Germany | 1,187 | Germany | 2,640 |
| 4 | Japan | 715 | Japan | 833 | Japan | 1,548 |
| 5 | Netherlands | 664 | France | 681 | France | 1,261 |
| 6 | France | 580 | United Kingdom | 654 | Netherlands | 1,254 |

Note: In 1978, total exports and imports of China were $20.64 billion, ranked 27th in the world.
Source: Compiled by the author based on WTO data.

In the financial arena, China's net external assets have been increasing rapidly, reaching $1.97 trillion at the end of 2013 (the State Administration of Foreign Exchange, "The Report of China's International Balance of Payments 2013"), the second largest after Japan with $3.11 trillion (a preliminary estimate by Japan's Ministry of Finance). In particular, China's foreign exchange reserves were $3.88 trillion, about three times those of Japan ($1.27 trillion). Adding in the expected progress in the liberalization of capital transactions in China, overseas investment by private-sector companies, in addition to the existing investment of foreign exchange reserves by the government, is expected to become more and more active.
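The contribution arithmetic described earlier (a country's growth rate times its share of world GDP at purchasing power parity) can be sanity-checked against the figures quoted in the article. In this sketch, China's roughly 7.7% growth rate for 2013 and the ~14.3% PPP share are outside figures assumed for illustration; the 1.1-point contribution and 3.0% world growth come from the text, and the function name is mine.

```python
def contribution_to_world_growth(country_growth, ppp_share):
    """A country's contribution to world growth: its own growth rate
    weighted by its share of world GDP at purchasing power parity."""
    return country_growth * ppp_share

# A PPP share of world GDP near 14.3% (assumed), together with China's
# 2013 growth of about 7.7% (assumed), reproduces the 1.1-point
# contribution reported in the article.
print(round(contribution_to_world_growth(0.077, 0.143), 3))  # 0.011

# And 1.1 points out of 3.0% world growth is the 36.7% the article cites.
print(round(0.011 / 0.030, 3))  # 0.367
```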
In the future, the influence of China on the world economy is likely to continue to increase not only in the real economy but also in the realm of finance.
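The three crossover years reported for Figure 3 can be reproduced with a short calculation. The sketch below assumes, as the scenario table implies, that the 2013 China/U.S. GDP ratio of 54.6% is compounded each year by China's growth and the RMB's real appreciation and deflated by U.S. growth; the function and its phase encoding are my own.

```python
def crossover_year(start_ratio, china_phases, us_growth, rmb_appreciation):
    """First year in which China's dollar GDP reaches that of the U.S.

    start_ratio: China/U.S. GDP ratio at end of 2013 (0.546 per the text).
    china_phases: sorted (first_year, growth_rate) pairs for China.
    us_growth: constant U.S. growth; rmb_appreciation: real RMB gain vs. USD.
    """
    ratio, year = start_ratio, 2013
    while ratio < 1.0:
        year += 1
        # Pick China's growth rate for the phase this year falls in.
        growth = next(g for y, g in reversed(china_phases) if y <= year)
        ratio *= (1 + growth) * (1 + rmb_appreciation) / (1 + us_growth)
    return year

# Optimistic, standard, and pessimistic scenarios from the table:
print(crossover_year(0.546, [(2014, 0.08), (2021, 0.06), (2031, 0.05)], 0.025, 0.03))  # 2021
print(crossover_year(0.546, [(2014, 0.07), (2021, 0.05), (2031, 0.04)], 0.025, 0.02))  # 2024
print(crossover_year(0.546, [(2014, 0.06), (2021, 0.04), (2031, 0.03)], 0.025, 0.00))  # 2077
```

Under these assumptions the calculation lands exactly on the article's three projected years.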
The original text in Japanese was posted on May 2, 2014.
Footnote(s)
• ^ In contrast to Mr. Tsugami, Professor Justin Yifu Lin of Peking University, who is one of China's leading economists and served as chief economist at the World Bank between 2008 and 2012, emphasizes the latecomer advantage and argues that China has the potential to maintain an economic growth rate of 8% into the future (see “Justin Yifu Lin Answers Questions: Main Points of New Structural Economics” on FTchinese.com, October 25, 2012, and “The Potential for Economic Growth in China” on FTchinese.com, August 28, 2013). However, Professor Lin appears to be too optimistic, given that he does not take fully into account the structural problems facing China, such as changes in its demographics.
https://svn.haxx.se/tsvn/archive-2005-08/0879.shtml | # Re: [TSVN] 'not a working copy' when deleting root working folder
From: Simon Large <simon_at_skirridsystems.co.uk>
Date: 2005-08-25 10:14:41 CEST
Mark Sanford wrote:
> 1) Create a folder "C:\source\test_working_copy"
> 2) Do a "SVN Checkout..." on this folder to get a
> working copy.
> 3) Right click on this folder and do a "TortoiseSVN >
> Delete"
> 4) I get a message "'C:\source' is not a working copy"
>
> This is not what I expected, which was: delete the
> folder. Note the folder in the error message is not
> the folder I was trying to delete. I am able to
> "TortoiseSVN > Delete" folders within
> c:\source\test_working_copy, just not the root working
> copy folder. Everything else (commit, revert, etc.)
> seems to work fine.
What were you trying to achieve?
I am guessing you just wanted to delete the local working copy. If that
is the case, just use a normal Windows delete instead. The working copy
is entirely contained within its top level folder. Checking out a
working copy leaves no mark on the repository, so when you have finished
with it you just discard it. BTW, if you move it to the recycle bin, you
should empty the recycle bin sometime soon. The .svn folders contain
lots of small files which really hit recycle performance. Alternatively,
shift-delete and bypass the recycler entirely.
If you use "TortoiseSVN > Delete" you are telling Subversion that you
want to remove that file/folder from version control. If you want to do
that, you have to do it from the level above (because the parent folder
contains the control file for the folder you are deleting). Or as Tatham
Oddie suggested, use repo browser and delete it directly in the repository.
Simon
https://math.stackexchange.com/questions/639555/limit-of-a-sequence-defined-by-a-n1-left-sqrtbb-righta-n-with-a | # Limit of a sequence defined by $a_{n+1}=\left(\sqrt[b]{b}\right)^{a_n}$ with $a_0=\sqrt[b]{b}$
Let $b\in[1,e]$ and define $$a_{n+1}=\left(\sqrt[b]{b}\right)^{a_n}$$ with $a_0=\sqrt[b]{b}$. Show that the sequence is convergent and find the limit.
One can show that $(a_n)$ is increasing. To show it is convergent, it suffices to give a bound for $(a_n)$. I get stuck here and I don't see how to go on. Some manipulation with the formula give $$a_{n+1}=\exp\left(a_n\log\sqrt[b]{b}\right)=\exp\left(a_n\log\left(e^{\frac{1}{b}\log b}\right)\right)=\exp\left(a_n\frac{\log b}{b}\right),$$ which doesn't seem to be of much help.
• I obtained that limit is equal to $b$, but can't prove it without a pen. – user98186 Mar 1 '16 at 21:26
You're almost there. Note that $(\log b)/b \leq 1/e$ so $\log a_0 \leq 1/e$ and $a_{n+1} \leq \exp(a_n/e)$. Then use induction.
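Filling in the induction (my own write-up of the hinted argument):

```latex
\textbf{Bound.}\ a_n \le e \text{ for all } n:\qquad
a_0 = e^{(\log b)/b} \le e^{1/e} < e, \qquad
a_{n+1} = \exp\!\Big(a_n\,\tfrac{\log b}{b}\Big)
        \le \exp\!\Big(\tfrac{a_n}{e}\Big)
        \le e^{e/e} = e .
```

So the sequence is increasing and bounded above by $e$, hence convergent, with limit $L \in [1, e]$ satisfying $L = \exp(L\log b/b) = b^{L/b}$, and $L = b$ is a solution. It is the only one in $[1,e]$: $g(L) = b^{L/b} - L$ has $g'(L) = \tfrac{\log b}{b}\, b^{L/b} - 1 \le \tfrac{1}{e}\cdot e - 1 = 0$, so $g$ is nonincreasing there. This confirms the comment above that the limit equals $b$.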
https://codeforces.com/problemset/problem/594/C | C. Edo and Magnets
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
Edo has got a collection of n refrigerator magnets!
He decided to buy a refrigerator and hang the magnets on the door. The shop can make the refrigerator with any size of the door that meets the following restrictions: the refrigerator door must be rectangle, and both the length and the width of the door must be positive integers.
Edo figured out how he wants to place the magnets on the refrigerator. He introduced a system of coordinates on the plane, where each magnet is represented as a rectangle with sides parallel to the coordinate axes.
Now he wants to remove no more than k magnets (he may choose to keep all of them) and attach all remaining magnets to the refrigerator door, and the area of the door should be as small as possible. A magnet is considered to be attached to the refrigerator door if its center lies on the door or on its boundary. The relative positions of all the remaining magnets must correspond to the plan.
Let us explain the last two sentences. Let's suppose we want to hang two magnets on the refrigerator. If the magnet in the plan has coordinates of the lower left corner (x1, y1) and the upper right corner (x2, y2), then its center is located at ((x1 + x2)/2, (y1 + y2)/2) (the center's coordinates may not be integers). By saying the relative position should correspond to the plan we mean that the only available operation is translation, i.e. the vector connecting the centers of two magnets in the original plan must be equal to the vector connecting the centers of these two magnets on the refrigerator.
The sides of the refrigerator door must also be parallel to coordinate axes.
Input
The first line contains two integers n and k (1 ≤ n ≤ 100 000, 0 ≤ k ≤ min(10, n - 1)) — the number of magnets that Edo has and the maximum number of magnets Edo may not place on the refrigerator.
Next n lines describe the initial plan of placing magnets. Each line contains four integers x1, y1, x2, y2 (1 ≤ x1 < x2 ≤ 10^9, 1 ≤ y1 < y2 ≤ 10^9) — the coordinates of the lower left and upper right corners of the current magnet. The magnets can partially overlap or even fully coincide.
Output
Print a single integer — the minimum area of the door of refrigerator, which can be used to place at least n - k magnets, preserving the relative positions.
Examples
Input
3 1
1 1 2 2
2 2 3 3
3 3 4 4
Output
1
Input
4 1
1 1 2 2
1 9 2 10
9 9 10 10
9 1 10 2
Output
64
Input
3 0
1 1 2 2
1 1 1000000000 1000000000
1 3 8 12
Output
249999999000000001
Note
In the first test sample it is optimal to remove either the first or the third magnet. If we remove the first magnet, the centers of two others will lie at points (2.5, 2.5) and (3.5, 3.5). Thus, it is enough to buy a fridge with door width 1 and door height 1, the area of the door also equals one, correspondingly.
In the second test sample it doesn't matter which magnet to remove, the answer will not change — we need a fridge with door width 8 and door height 8.
In the third sample you cannot remove anything as k = 0.
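For the small sample inputs above, the statement can be checked by brute force over which magnets to remove. This is only an illustrative sketch (it enumerates removal subsets, nowhere near fast enough for n up to 100 000); working in doubled center coordinates keeps everything integral, and all names are mine.

```python
from itertools import combinations

def min_door_area(magnets, k):
    """Brute force: try every way of removing at most k magnets.
    Exponential in the subset choice, so only usable on tiny inputs."""
    # Doubled center coordinates (x1 + x2, y1 + y2) stay integral.
    centers = [(x1 + x2, y1 + y2) for x1, y1, x2, y2 in magnets]
    n, best = len(centers), None
    for r in range(k + 1):
        for removed in combinations(range(n), r):
            kept = [centers[i] for i in range(n) if i not in removed]
            dx = max(c[0] for c in kept) - min(c[0] for c in kept)
            dy = max(c[1] for c in kept) - min(c[1] for c in kept)
            # Each door side is a positive integer covering half the
            # doubled span: ceil(dx / 2), but never less than 1.
            area = max(1, (dx + 1) // 2) * max(1, (dy + 1) // 2)
            best = area if best is None or area < best else best
    return best

print(min_door_area([(1, 1, 2, 2), (2, 2, 3, 3), (3, 3, 4, 4)], 1))  # 1
print(min_door_area([(1, 1, 2, 2), (1, 9, 2, 10), (9, 9, 10, 10), (9, 1, 10, 2)], 1))  # 64
```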
https://www.yumpu.com/en/document/view/42202026/its-the-farm-book-low-resolution-lets-re-make | # It's The Farm Book (low resolution) - Let's Re-Make!
It's a blessing to wake up to this level of consciousness the first time in any condition whatsoever. It's a blessing to realize that none of the stuff that's happened to you has changed you or harmed you or hurt you in any permanent way whatsoever. Once you understand the unsulliable nature of the intellect, it's no longer necessary to seek absolution for past sins. Dig that? That's a powerful spell. Anybody who understands that can be absolved in the here and now.

I don't have an ultimate goal in life. I believe in the vow of the Bodhisattva. And that says that sentient beings are numberless, I vow to save them all. The deluding passions are inexhaustible, I vow to extinguish them all. The way of the dharma is impossible to expound, I vow to expound it. It is impossible to attain the way of the Buddha, I vow to attain it. And that keeps you busy. Don't be hung up about beginnings and ends. All this was here when I got here, man, that's all I can tell you, it was here when I got here.

You have to work on getting higher. It's like the big condors that fly in the thermals. The Universe is going to come along and bang you around a little bit and bring you down once in a while, so you just look around for good vibes and rise in them. When you blunder out of the good vibes into the bad vibes, try to be graceful about it until you find a place where you can rise a little. But if you keep wanting to get higher and you keep wanting to get higher . . . There's a thing about that, invented by Suzuki-roshi, which is the theory of minimum desire. And he says you should desire desirelessness until the desire for desirelessness becomes a desire, and then you better level off about there. Desiring altitude is what makes the come-downs on the other side seem so heavy. Say, "I do not seek after enlightenment; neither do I linger in a place where no enlightenment exists."
The nature of the universe is that it wants to do what you want, so much so that if you're afraid it ain't doing it, it gives you that. It's like if you want something, the universe will give you what you want. And as long as you want things, you're going to keep getting what you want. But the universe also has something very precious about which you may not know. When you quit wanting things, then you find out what there is. That's what they mean by casting loose of your desires. The universe is really subtle, man, it's just right in there trying to give you everything, just trying to give you everything, and all you got to do is quit messing up its systems enough so it can do it and fix you up. That's what happened to me, ever since I put my faith in the universe, which I did some years ago. One night in San Francisco I said, "Looky here, I've been doing this spiritual teaching for all you folks for so long that I ain't got time to do anything else anymore. And I'm going to not try to do anything else, I'm going to do that full-time, and if I get taken care of, groovy, and if I don't, well, I'll have to go to another town and start over again." And since that time, since I cast myself adrift, my cup runneth over, you know, everything is working out.
We live in a community of six hundred people on a seventeen-hundred-acre farm in Tennessee. But that's not the first thing we do. What that is is right vocation - that we wanted to have a way to make our living, because we were a Church and wanted to live a spiritual life. If you really want to be spiritual, you don't want to have to sell your soul for eight hours a day in order to have sixteen hours in which to eat and sleep and get it back together again. You'd like it that your work should be seamless with your life and that what you do for a living doesn't deny everything else you believe in.

We be spiritual, and that means that we believe that events take place coming from the spiritual plane toward the material plane, which is to say if you want to influence the material plane at all that you have to start from a spiritual place. And if you start from a material place to move the material plane, you run into action and reaction, entropy, things running down. But if you move from a spiritual place, you can do things that get done and stay done. You can make changes that stay changed.

Eight or ten years ago a lot of us went to San Francisco because the word was out that something spiritual was happening there. A lot of us went there to look at it and see if it was real and see what was happening. In the course of that I had what I have to at this time call revelations. I didn't used to call them that - I used to call them trips. But eight years later I'm looking back at them, and they still look heavy, and they look like something really did happen, and they said something about how it works. So I went and started an experimental college class with a bunch of folks, and at first we were like a research instrument, and we read all the books we could read on the Tarot and the I Ching and yoga and Zen and fairy tales and science fiction and extra-sensory perception, and a whole area of stuff that suddenly looked like it had juice in it that didn't look that way before.
Like when I was in high school, the universe was wrapped up. They knew how many elements there were, and they said it was all a material-plane trip, and there was nobody coming around being telepathic and really heavy and really stoned. Folks just believed it was materialistic for a long time. And I watched it change.
After we were there for a while, we met somebody who said they'd lend us a thousand acres, and we needed to land somewhere, so we thought that was a good deal. We came down from Nashville to Lewis County, which is just the boondocks of Tennessee, and drove in off the main road - off the interstate, off the four-lane, off the two-lane onto the dirt, off the dirt across the back of this farmer's cornfield, down through the woods into a little one-acre clearing in the middle of that thousand acres, which was all blackjack oak. And then it turned out that there was a feud going on between the cat whose cornfield we just came across and the cat who was overseeing the land. That road got closed. We had to stay in the middle of that place until we hired a bulldozer to come in and build another road back out, to find our way out again. But what we did was we didn't get into a hassle with that cat about that road. We just said, "Okay, we'll make another road."

We stayed on that place for a few months, began to slow down from caravaning and find out what it was like to be stable human beings on the ground again. We'd been on the road seven months. People kept saying, "Oh, I get it, let's all live in a caravan." We'd say, "No, no, man, that's not where it's at." We were like a large organism in the vein of society, and if there were many more organisms like that, society might take some sort of penicillin for it or something. But while we were there, we got to know the people around us, and it was a temporary place, so nobody was uptight that we were there. See, this is how we had to land. It was like coming in from outer space to land the Caravan. And we could stop when we were just looking, because everybody said, "They're just passing through. That's cool." So when we got down to this place, it was a temporary place. Everybody said, "Well, okay, it's temporary."

So we hung out there for a while. Well, it went along and all the county dug us, and then we found that a neighboring farm was for sale. It was like a poker game. They put out a quarter, we covered that quarter and bought the next farm. And it was about twice as big as any of the ones we'd looked at. And it was exactly what we wanted. It was just perfect. It had about five springs, a hundred and fifty acres you could plant on, a thousand and fourteen acres in all. It was a really pretty place, and we bought it for seventy dollars an acre. You can't get a kilo for seventy dollars, can you? You can still get an acre of dirt for that. And you can live on an acre of dirt. We bought a thousand acres for seventy dollars an acre, and we've been on that farm for over two years now.

Why I'm bothering to tell you all this is so you know the changes we went through getting to this place, so you can see that we didn't get sponsored by the Ford Foundation or anything like that - that we did this thing going along, beatniks doing a thing, and that you can do that too.
It feels to me that if we're going to do it, we're going to have to have all hands on board. You can't say the boat will float better if you throw somebody over the side. And if we're going to have all hands on board, then we better start getting introduced to each other, so we can get the ship afloat. One of the religions we believe in is Mahayana Buddhism. That's the variety of Buddhism that says there's no final and perfect enlightenment until everybody is enlightened. And the closest you can get to it is to figure that out. And when you figure that out, there ain't nothing to do but hustle until we get everybody off.

Religion starts heart to heart, mind to mind, eye to eye, between real people. Some people talk about fancy trips where they went to fancy places, but the fanciest places I've seen were in somebody else's eyes. And the neatest stuff I ever saw was in somebody else's eyes. You can look in somebody else's eyes and you can find truth right there. Truth doesn't have a brand name on it like Pepsi-cola or Coca-cola; it's like water - runs in every creek and falls out of the sky.

And religion is like water. The way you can check it out is the same way as water. If it freezes at thirty-two degrees and if it boils at two hundred and twelve and all that kind of thing, then it's water. And that's the way religion is. If religion is compassionate and if it excludes nobody and if it doesn't cost money and if it really helps you out in the here and now, that's how you can tell religion - real religion. I've seen psychedelic fancies on people fancier than the fanciest rock and roll poster you ever saw. I've seen auras on people and rainbows coming off the tops of people's heads, and lots of stuff, but I don't believe in any of that fancy stuff unless there's compassion and good sense present. That's what I found out through all that tripping: when it comes right down to a rock bottom place, what's real is compassion.

I don't believe in too much religious paraphernalia.
I think that the most important thing to understand about the life force energies is that you can move them with your mind - and that you don't need a Tarot deck or a Ouija board to connect with that stuff, and that you can do real healing and teaching things, and that there is so much evidence at hand to be seen now that I don't think it's necessary to make what you call a leap of faith. I think that the evidence is manifest right now. And even if you're still self-indulgent in ways, you really know it's there anyway. And everybody really knows it.

I don't think a spiritual initiation is something you pay thirty-five dollars for, and I think that any teacher that charges money is a fake, because spiritual teaching is for free or it ain't real. And I don't mess with palmistry or astrology or the I Ching or any oracles or divinations, because complicated systems of magic have nothing to do with the spiritual plane other than to help you remember that that plane is there. That plane exists independently, and if you be of good will and love God and love your neighbor as yourself, you can inhabit the spiritual plane. And that's grace, which comes from being pure in heart.

There's a religion which is perfect and true and has no errors in it, and all man-made religions are attempts to copy that religion. And it exists unwritten for all these billions of years. And you can tell the people who know about that religion, because it works for them in the here and now, and they look sane and healthy. You can find people who practice that religion any place, and you can tell who they are, because they look together and they're friendly and they're sane and they're functional and they're actually able to do things.

We don't be into a lot of visible ceremony. Ceremony rots away so quick.
We think that to pray is an English four-letter word that means to communicate with telepathically, and that you pray with your fellow man and in the presence of God, and that you be in telepathic communication if you know how to do it, and that part of being spiritual is clearing your heart and mind so you can do that and know it's happening.
I have a whole new set of sensors that I've acquired in these last seven or eight years, and I feel a lot of stuff I never felt before. And it's funny, because it's the most common and necessary ingredient of life, and at the same time it's the most outrageous science-fictiony mind-blower that ever happened to me, and it's that you can feel life force with your own equipment. You don't need dials or gauges; you can feel life force. They call it Holy Spirit, and it runs real high like at baby happenings.

All this stuff is based on the thing that God is real, and that what you do counts, and that what goes on inside your head and in your heart of hearts makes the difference, and that if you be nice on the outside and nasty on the inside, you get nasty karma, and if you be straight all the way through, you get straight karma. Most folks don't know anything about karma, because their cause and effect is so muddled that they can't tell for sure what they do that makes things happen. Sometimes folks like that are the ones that straighten up in a car wreck or something, because something like that will show them a karmic chain that they never saw before.

But you ought to straighten easier than that. You ought to pay attention to karma and cause and effect so you can learn what's happening at lighter-weight karma than having to wait till something like that wakes you up. That make any sense?

I'm trying to tell you what I'm doing, why I'm doing it at all, and about a level of consciousness. My level of consciousness when I'm undisturbed is like a baby's, and I like it that way. I like to hang out with brand new babies when they're just born, because I can let go to their vibe and feel a lot of peace. And I can do that by myself. I just go out to the woods and sit, or I can smoke a joint; I can do that a lot of ways and go to that peaceful place. 
But as soon as I go to that peaceful place and then get peaceful, I find out that what I have to do is get up and go back out and start taking care of business again, because there's a lot of folks that still need help and I can't rest yet.

See, this is all a commercial for God.
Most of what we grow is just what the neighbors grow. We've found out that if they don't grow it, it doesn't grow so well. We grow a whole lot of sweet potatoes because they're a Tennessee staple. We grow okra and a lot of sweet corn. We grow a lot of peas in the spring and green snap beans in the summer. Both are easy to grow and can be followed by other vegetables. It's a lot easier raising a few acres of bush beans or peas than anything you'd have to stake. We grow snap beans and snow peas because they don't need shelling.

We harvested seventeen tons of tomatoes last year from eleven thousand plants. This year we have twice that amount. That many tomatoes alone was worth the financial investment we put into farming last year. We're contracted for ten acres of pimiento peppers and also have lots of excess tomatoes to sell. Eventually, as we save more and more money on seed and fertilizer, the whole farming operation should pay for itself.

We save our big seed, grains, beans, peanuts, corn, potatoes, sweet potatoes and so on, because they're the most expensive. But small seed is cheaper to buy than to mess with. It costs six dollars for enough tomato seed to produce seventeen tons of fruit. We've also learned that varieties recommended by the state usually do best.

We planted our first potatoes this spring, and as we save our own seed we'll plant maybe five times that. Fresh potatoes are far out, and you can get in two crops a year here. We hook a tool bar on back of one of our tractors with "bull-tongue" cultivator shovels set 36" apart to make furrows. Folks come behind and drop in potatoes and either hoe up a ridge over the row or a horse cultivator ridges it up.

We plant our cucumbers, squash, and melons in rows wide enough for a tractor and small disc to go between them before the plants vine. That gives the plants a head start on the weeds. When we first got here all our neighbors laughed at our mounds and asked us if we learned that in California. Plants don't get enough water in mounds here. And you really need to know how you're going to cultivate something before you plant it.
When we were gardening in our back yards, before we got here, we hadn't really considered planting with tractors and combining and growing big fields of beans and grains. Learning mechanics and how the tractors run and how to plant straight rows and plow and disc ten-acre fields expanded our consciousness, because it took more real attention than we were used to putting out.

We put a whole bunch of priority on planting our protein. There was so much rain all through April in this part of the country this year that the planting season was cut in half and we had to hustle, along with millions of other farmers. We scored an old four-row planter, cherried it out, and when we had our fields ready we put all our tractors and planters and crew on getting our soybeans in. We planted twenty-five acres our last night. Soybeans are this country's number one cash crop, they're grown all over, and most anywhere you move away from the city there'll be a half-dozen neighbors who grow soybeans and will tell you exactly how to do it.
A YEAR'S MENU
What we grow to feed 600 folks:

Cool Weather Vegetables
4.5 acres peas (English)
3.0 peas & Chinese snow peas
1.5 Irish potatoes, first planting (will raise to 10 acres)
1.0 onions (to 8 acres)
.5 cabbage, broccoli, collards
.5 spinach
lettuce

Warm Weather Vegetables
9.0 acres sweet corn & roasting ears
6.0 sweet potatoes (to 10 acres)
4.0 winter squash
3.0 tomatoes (25,000 plants)
5.0 green beans
1.5 watermelon (to 4-5 acres)
1.0 peppers (bell, cayenne & wax)
.5 okra
.5 eggplant
.5 cucumbers (fresh & for pickles)
.5 summer squash
.5 Swiss chard & New Zealand spinach
3.0 Irish potatoes, 2nd crop (to 5 acres)

Fall Vegetables
3.0 acres cabbage
1.0 lettuce
1.0 spinach
.5 broccoli
.5 collards
.75 kohlrabi
.75 Chinese cabbage
.5 beets
2.0 carrots, turnips, cauliflower, kale, Brussels sprouts

Summer Field Crops
87 acres soybeans (enough for eating, the dairy, and next year's seed)
15 beans (pintos, blackeyes, navy beans, turtle beans; could go up to 30 or 40 acres)
12 peanuts
7 popcorn
10 field corn
10 buckwheat
7 sorghum (and what our neighbors grow)
5 pimiento peppers for a cash crop

Winter Grains
100 acres wheat, oats, rye, and barley
Everything else will be cover-cropped.

What we're really into is making a living in a clean way. I guess farming is about the cleanest way to make a living. It's just you and the dirt and God. And the dirt, you make friends with an acre of ground and get it to give an A like in college or something. If you make friends with it, you have to put work into it, and then it'll come back and feed you, it'll really do it. But you can't snow it or anything like that; it's going to be real with you.
We have a bunch of philosophical assumptions about how the world works, and the result of those assumptions is we decided to go be farmers for a clean way to make a living, to interact with something that didn't rip off a bunch of other folks and didn't depend on any social position. That's why we went off to be farmers together. We didn't start off to be a farm, you know, we started off being a church, and then we said, "We want to live together. How can we?" Before we got down to Lewis County we thought we was the space-agest modernest thing there was. And when we got there, there were the Mennonites and the Amish all the way from Lancaster County to Tennessee, who got there first and broke ground for us, for long hair and spiritual groups and things like that. So there's a lot of stuff people accepted about us from the beginning. Once they learned that we really weren't scary and we really weren't violent and we really were truthful, they started thinking we were Technicolor Amish.

Our place in Tennessee is the first home I've had since I left my father's home. I never had a home since then, and we made a home, and we have real neighbors. We love our neighbors. They're good to us.

We believe you should love your neighbor as yourself. But when you love your neighbor as yourself, what you do practically is you find out how to do that. If loving him is to stay off his lawn, stay off his lawn and out of his hair, if that's what he wants, because he wants the same thing you want, a little peace of mind, a place to be. Sometimes we do other things for our neighbors. We swap farm equipment with them and work with them. We're partners with one of them in a sawmill; we were partners with another one in a hundred and forty acres of sorghum. That also comes under the heading of loving our neighbors. It's really good to interact with them.

What we've done in Tennessee is we've showed them our minds ain't blown. We're okay. We can still figure out a tractor. 
We keep the toilet paper dry most of the time. We're like a heart transplant, and we ain't been rejected. They've accepted us. The people down here like us, because we're truthful and because we work hard and because our checks don't bounce, and because we help out. If somebody's having a little trouble getting their hay in on time because it's going to rain or something, he'll come to us for some men. They hire us to come and build stuff for them. A local chapter of the Holiness Church who didn't dig our doctrine had our masons lay the bricks for their church.
We started putting value into some sixty- and seventy-year-old people in our county because they knew so much. Man, they've been making it for so long they know how to fix everything, build everything, how everything works. And they found themselves being hung around by all these strong young longhair cats trying to learn their thing. It turned them on, and they said, "Somebody wants to know all this old stuff. I didn't know anybody wanted to know this old stuff." And we said, "Yeah, man, how do you do it?"

We live in peace with our neighbors. We had five or six Church of Christ ministers and members of their congregations come to our Sunday morning meetings and our Monday night classes, and I heard on Wednesdays they were playing the tapes of those meetings in their church. And for six weeks we argued all the questions from John 3:16 on down, and we went through many changes with them. And it came to a place where they showed us a slide show about how it was supposed to work, and there was a picture in this slide show of these priests putting the wrong kind of fire on the altar, and a big lightning bolt comes down and fries them. And we said, "Hey, man, ain't that a little violent?" And they thought we were funny about that, but we went through all those arguments and questions, and it came down to a place where, although we were technically heathen as far as they were concerned, we were good enough neighbors that they could deal with us and we could be part of their community. And we got to be friends with them, and we argued it out and it got heavy. You know, we hollered and stuff. And there was this one preacher there that was a knock-out preacher; he was the heaviest short-haired preacher I ever saw. Because every time it would get really heavy, he'd say, "But we want to keep talking, don't we? Don't we want to keep talking? Is that what we want? Do we want to keep talking? Don't we?" I'd say, "Yeah. Yeah. We want to keep talking. Uh-huh." Love that cat, man, love that cat.
Horses are the only livestock we have on the Farm, and we treat them like we treat the monkeys. We feel that horses are intelligent; you can be stoned with them, be friends with them, and they'll do what you ask them to. We use horses and horse-drawn equipment, passed on to us by our Amish neighbors, to haul water and material and to cultivate the vegetable fields, and we feed them clover and hay.
If you live on the Farm you give the Farm everything, because the Farm's going to take care of your needs. We live according to the Book of Acts in the Bible, which says: And all that believed were together, and had all things in common; and sold their possessions and goods, and parted them to all men, as every man had need. So if you have to go out and do something in town that takes money, you stop at the bank, and if there's bread the bank lady will give you some bread, and you give her the change when you get back.

We have a gate and a gate man and a gatehouse, and we have it like that because it's a home more than a farm, so it's more of a front door than a gate.

We have people that meet folks at the gate and take care of their thing if they're tourists or local Tennesseans or something. Somebody escorts them, takes them through the whole farm, shows them everything so they won't think we're doing weird stuff in the corners that we don't show them. Parents and folks with business get delivered to places where they're supposed to go.

The gate ain't to keep the monkeys in. That's one thing about it is that it swings out really easily. Anybody can walk out really easily; that's not a hassle at all. Getting in may be a little harder. But everybody on the Farm came through it at one time or another. The man at the gate came through it once. And so it's a compassionate thing. And the gate man works it out. Sometimes folks come to the gate who are so weird that he's got to work it out with them for hours, and sometimes they get mad and go away and sometimes they shape up right there on the spot; it really happens sometimes. To get through the gate you've got to work it out with the gate man. The gate man believes in telling the truth. It's a yoga; the gate is a yoga in itself.

In our travels we've talked to a lot of parents and a lot of children. 
We've talked to kids who have run away from their parents, and we've talked to parents looking for their kids. It seems like certain outfits say that it's not cool to communicate with your parents. But people change and parents change, and after a couple of years parents who didn't want to look at any longhairs are really hot to hear where their kids are, and they don't care how much hair they got. So don't hold no grudges on your parents. You ought to write them a letter, even if it's just a postcard from somewhere you don't even live so they can't trace it, cause they just want to know if you're cool. So write them, call them, let them know you're cool.

We have to make choices on the Farm about keeping the thing together, because here we have this agreement and everybody's thrown all their bread in and everybody's been throwing all their hard work in it for all these years, and it's the agreement, and we want to keep going. So we say we're going to do things like not put so much emphasis on what we might personally want and pay some attention to what's necessary to keep the thing going, because that's the boat we're all riding on.

We tell each other where it's at. It's a good thing to do. It's a good practice. And if it puts you uptight to be told that, that's evidence that you're holding. There's various things that we agree on, like that we're absolute vegetarians, and everybody on the Farm does that, and nobody smokes cigarettes, nobody drinks alcohol or wears leather or eats meat or dairy products. That kind of stuff is like ground rules, but otherwise everybody is just supposed to be cool, to be on top of it. You're supposed to be neat, and your friends will hassle you if you ain't groovy. They'll say, "Hey, man, where's it at, how come you're being a pain in the ass? Shape up!" And people get on each other. But you shouldn't come on heavy experimentally. 
You can't say, "Well, nothing else works, let's try this." You shouldn't come on heavy unless you know just exactly where it's at, just really where it's at, and if you can see the results of what you're saying. I'm trying to get people cool all the time, and if somebody comes and be's a ripoff I'll just holler and hassle him and say, "Hey, man, what are you doing? Where's it at?" And come on to him like that. Or I might say something peaceful and kind to him that might snap his cork, but anyhow I'm going to be up in the middle of his thing. And when I come on heavy to somebody, come on hollering at somebody, I can see them change word by word, just like when a hammer hits the head of a nail you see it go in a chunk every time. I wouldn't do it if I didn't have that affirmation every time, because it can be scary to come on heavy like that.

It's okay to tell somebody where it's at. What we should do is practice enough loving kindness and brotherhood that when it comes time to tell somebody where it's at, there's a strong enough bond of love that you don't just alienate him and kick him out of the boat. We ought to get to where we can rub against each other hard enough that we can say something to one another.

Some people say that the telling of truth among us on the Farm feels like a cold water bath; some folks don't dig it at all, and just as soon as they figure out what's going on, they split, as fast as they can. But there's other folks that are turned on by it. If you be straight with people when they're not being straight with you, when they look in your eyes you both might laugh, because they remember and know and they have to cop and they crack up. It happens all the time, it's really fun.

[Q: How much do you let somebody jump on your ego?]

If it's on your ego, give up gracefully, man, let them have it all. It's easier that way. The band-aid and hairy-leg technique is what works best: strip! 
The volume knob on your telepathy is your morals.

In a little while you see that go on commonly enough that you develop a sense of humor about it, and you don't go into such severe praise-blame every time anybody ever says anything to you, and you don't panic out. And you realize that it's just, "Move over to the left a little bit. You're touching the white line," or "There's a red light up there ahead, which you may not have seen because you haven't been slowing down yet." Or something like that, instead of, "Oh, my character must be wrong," or, "Oh, I'm bad," or, "Oh, he hates me," or, "Oh, nobody loves me," or, "Why is everybody always picking on me?"

We all ought to be very kind and very compassionate with each other about how we give each other our attention. What you really do with folks is you love the best in them. You know the best one of them that they can be, and you love that, and every time you see it you dig it. That way everybody can help everybody grow. A bunch of folks that do that get better-looking overall. I say that because on the Farm you can really see that happen, because the Farm by now is a very powerful field of a way of being, because there's so many people in complete agreement about doing that thing.

Most of the time we're really happy doing what we're doing. But it gets that way because we don't shrink from a certain amount of hassle. It's as exciting as taking a psychedelic once a week to live with about six hundred people who will tell you where you're at every time they get a chance. You never know when you're going to have your living ego death.

[Q: Can you tell the truth and still be compassionate?]

It's not that we don't ever have any hassles. We have things happen to us that are pretty heavy. Any family that big just statistically is going to have some heavy things happen now and then. 
But we love each other good, and we be good to each other, and it ain't so bad on us, and we kind of weather our stuff through together.

If it gets to running weird, we have a meeting and we talk about everything that's in everybody's head, and it just cleans us out and makes the Farm run smooth. That's the real secret. If it's clean, it runs smooth, and if you let it get bogged down in a lot of subconscious, people don't get along with people and it don't run smooth and you can't make it. And that's really how we make it. We believe that thing in the Bible about, "Cast the beam out of your own eye before you try to get the mote in your brother's." It didn't say you weren't supposed to get the mote in your brother's, it said you could try to help him too. And we try to help each other, and we try to be good-humored about it and don't put heavy trips on each other about it.

You've got to say what's true, you've got to tell the truth and fear no man. There's always folks that are going to want to shut you down so you won't blow their cover. How we make it on the Farm is we don't let folks shut us down when we're trying to blow their cover. It works out that on the Farm everybody's uncovered. Ain't anybody there with much cover to blow. We say that we're like a mental nudist colony, and you have to take off your head clothes. We just don't believe in that level of privacy, because we'd rather be sane than be highly individualistic.

One of my teachings is that when someone points a subtle implication at you you're supposed to rip the top off it and say, "What's that?" I really think that's an important thing to do to keep yourself out of trouble. We don't let one speck of implications go by. As soon as somebody starts implying stuff, we'll try and state what the implication is as clear as we can. And we tell each other where it's at. 
The result is that most of the time we get to groove, most of the time we get to live a really good life. I think the truth is much more compassionate than a lie. But there's a place in there where you have to ask is it kind, is it helpful, and is it necessary, and if it's unnecessary and unhelpful and unkind, you can't say it, even if it's true. If it's necessary, you have to say it, whatever it is. Sometimes I've said stuff that I just knew was a stone dogfight as soon as I opened my mouth. I was at a peyote meeting one time, and the vibes were so bad and so weird and nobody was saying it, and I looked around and saw that it was an agreement that these folks had to don't get no stoneder, you know. And I put my back up against the wall and said, "The vibes in here is weird," and started this terrible hassle, man, that went on for hours. And I'm so glad. I'd rather hassle forever for truth than live in a lie.

This is the most spoiled generation in the history of the planet. That's because of that entire psychological trip of the last twenty or thirty years that says, "Oh, poor baby, you're so determined, you can't help it." And he says, "Yeah, yeah, spoil me some more!" This whole society is in a condition of overcorrection, like a car that's fishtailing on ice. Our grandparents were strict with our parents, and our parents were loose with us, and we're the sloppy beatniks. And we got to raise our kids halfway in between where our grandparents raised our parents and where we were raised. What it looks like to me is that Freudian psychology and Doctor Spock and greed and B. F. Skinner and a few details like that taught this country that morality didn't count and that all that counted is what you got caught for, and that there was no abstract absolute morality, so it didn't matter what you did; you could just do anything. And you could freak out as much as you wanted to, and it didn't matter. But it does matter. It can get you crazy. 
One of the things we notice when we're traveling around the country is that American folks keep their kids like adolescents where in another society they'd be grownups. There's people their age in other cultures who are making it on their own and supporting other folks too, whereas adolescence in this country continues on to about thirty.

You may be in the habit of thinking that this age is it, that obviously civilization has existed to bring us to this point. But neater civilizations than this one have come and gone. Compared to many ages in the past, we're a bunch of heathens. This is the late Dark Ages; religious knowledge in the United States is just at an amazing standstill, has been for many years, because we've been taught to be materialists. Mankind has been freaking out for five hundred years cutting its own throat. But there were times in China where they went a thousand years without any wars, and the emperors devoted themselves to poetry and music and making love because that was all they had to do, because there weren't any hassles. And they lasted on for a thousand years like that. We could be peaceful, too, if anybody cared to try. There have been times when countries that were at war had to quit being at war because the troops quit doing it and they couldn't make them do it no more, and they'd get out in the trenches and say, "Go get 'em! Go get 'em!" And they'd lie there and say, "No, man, I don't want to do it." And both sides would quit. That's happened. There's historical records of that stuff going on. If the people don't want to have a war, they don't have to have one.

The problem is that as a culture we're uncompassionate with ourselves, and we give some of us a hard time and let some of us get very fancy and rich. Then after that it's what to do about that. There's the pie-in-the-sky school, which says, "Don't do nothing, you'll get it later." Politics says, "Take it now, man, when you're alive." 
And the spiritual way says there is a moral imperative, in that you must not take life, and that you got to observe that the seven deadly sins are really deadly, which is like anger and fear and lust and stuff like that. But we don't know much in this country about a spiritual way, not really. When the Constitution said Congress shall make no law respecting an establishment of religion, they thought that was going to give us religious freedom. But it didn't. What it did was it made religion unimportant and defined it as unimportant and said that the important stuff is covered in the Constitution. Well, you can't take a people's religion away from them. What happens is they'll grab whatever's next. And so now the saints of our religion are Washington and Jefferson and Lincoln and Kennedy, and instead of a cross we got a flag. And the religion has become the state, and nationalism is the religion of the United States, and nationalism is a materialistic religion, and a materialistic religion is what you have to call dark arts.

The thing is, somewhere back in there the Church got so corrupt and so riddled with priesthood and weird dogmas that back two hundred years ago it got to where mankind couldn't hack it no more and said, "Nuts to all that shit, man, let's don't be superstitious, let's be really real." And so they were going to be scientific, and there was this idea of the scientific method coming in, the morning of the scientific age and stuff like that. And in a way I knew those cats were too conservative, that they threw out important stuff along with some of the superstition, but I didn't know what it was or what it might be for years and years, because I never had any experience with it. But now I know what it was they cut loose of: they cut loose of the life force, they cut loose of the energy, they pulled the plug.

So our past karma up to date ain't working too good as far as civilization goes. 
And the parts that are working good are the parts that are the most divorced from the technological thing. The farms and the places like that are doing it. Backward countries are way ahead of us. They talk about the Trobriand Islanders being a backward culture because they don't have any machines and stuff. But, you know, somehow or another they got it figured out where they're managing to make it without all those smelly machines. And we should be trying to figure out how they did it, instead of trying to convert them to our thing. Americans are more hardnosed about converting people to their standards of living than Christians ever were.

Open up your head and let this stuff flow in. Let it zonk right into your subconscious.

The reason that our technology is overrunning us so bad is because we build so much junk that we don't need. If our technology was cut down to the minimum that it takes for us all to survive well, it would knock out most of our pollution and smog and crap problems almost immediately. Also if you didn't have artificial centers. A city is an artificial center. I think cities are psychically unhealthy, and I think a great deal of the dope-taking in this country is from being dumbed out by cities until your brain cries for intelligence just like your body cries for protein. The thing about cities is this: What really makes them a hassle is lots and lots of folks being there because they want to be in the city scene and they'll take any kind of a job, no matter how silly and meaningless it is, to support themselves at the inflated standard of living it takes to live in the city. And if the folks who were doing that would just split to the country and take care of themselves, the folks who were in the city doing stuff that was necessary for mankind could just do it, and the cities wouldn't have to be such crowded garbage holes as many of them are. 
It could even be a groove to live there.

I think the economy is on a giant speed trip, and it's an artificial level, and we cannot maintain it. The country's technology is overblown because it builds stuff to decay. Well, planned obsolescence is outright sin, as far as I can see. Most businesses should fold on account of they're worthless. Hair spray factories should all fold, factories that make junk jewelry should fold, factories that make all kinds of useless crap should fold. We don't need an automobile each to get us around, and it's a terrible waste of energy to do it that way. We don't need to cut down thousands and thousands of trees to print thousands and thousands of newspapers full of bummers. Meat processing plants are unnecessary, and the dairy industry is unnecessary. If a large percentage of the people were out in the country feeding themselves, it wouldn't be such a hassle to feed the people that we do need in the cities to produce a few tractors and some real stuff. I'm not saying that you can't get it on outside of the farm. I'm saying that if you're an honest cat, you ought to get it on at the tractor factory and you ought to say, "Look at that big mother pull, it's going to feed a lot of people." And you ought to be able to dig it.

You need this information. It's going to help you get it together.

AMERICANS ARE THE GREEDIEST PEOPLE IN THE WORLD. SIX PER CENT OF THE WORLD'S POPULATION USES THIRTY-TWO PER CENT OF THE WORLD'S NATURAL RESOURCES. THAT'S GREED.

Now being religious and spiritual these days has a lot of juice in it, because a lot of people's bottommost desire is for it to get to be a real show instead of being a plastic one. They'd like for this movie to be a good movie. Wouldn't you like that? There's a lot of folks digging that, wanting it to be that way. And there's a lot of juice in religion right now on account of that. 
The one prophecy that I'm willing to really stand up and cop to is to say that there's a giant spiritual renaissance coming down on this country, and a giant financial depression, and they go hand in hand, because as folks lose their tail, they're going to have to cop to God.

As near as I can tell from the viewpoint that I have come into, the overall consciousness of mankind is at fault for the evils of any given age. And mankind really needs to become compassionate if he's going to do it at all on this planet. Some people on the planet don't have enough of anything, and some are mistreated in extravagant ways. I'm trying to talk to the overall consciousness of mankind. I'm saying that if you would like for there to be enough to go around, there is a way where it can be shared out where it will go around, and it will stretch, and we can eliminate misery and poverty. Competition between nations and hassles over bread and big international money trips and wars and all that is all optional. We don't have to live that way. In the sense of saving the planet, the trouble is not that there is not enough capital in the world to go around; there is enough to go around. The world is filthy rich. If you want to measure capital in terms of iron, the planet is about ninety per cent iron. It's not running out of aluminum, it's some huge percentage of the earth's crust, and they couldn't dig it up in millenniums. The real thing is that folks through lack of compassion don't be fair with the goods. That's really the rock bottom one, isn't it, that folks through lack of compassion don't be fair with the goods. And a political situation does not change your level of compassion. People cannot be legislated into being cool, they cannot be gun-pointed into being cool, they cannot be conditioned into being cool. 
Politics is not the way to change people; Spirit is the only way that will change people.

It's not too complicated to assume that your philosophy, your religion, your science, your psychology and your law should all be identical. They're all describing the same universe. They all ought to come out of one rule, and they don't in this country. We have a religion that tells us you better not be a materialist or it's going to hang you up, and the whole rest of the system is tempting you to be a materialist. So when I say religion, I don't necessarily mean Catholicism or Judaism or something, but I mean a philosophy and a world view that covers you all the way through. And if the system that you're working under now doesn't cover you, you've been burned, because there's systems that cover you.

This country needs in great numbers to become voluntary peasants. That's a lot of what the hassle's about in the government. They're just scratching and fighting because the bread's getting so funny. Didn't you think it was funny? It's within the last three years, I believe, that the price of gold has gone from about thirty-seven or thirty-eight dollars an ounce to one hundred and forty dollars an ounce, when it only went from eighteen to thirty-five since 1840 or something. Wow! Didn't anybody notice that? The American dollar has never been devalued on the foreign market previous to this year, and it's been devalued two or three times this past year, and money on the world market is no longer counted in terms of the American dollar. Money on the world market is counted in terms of German marks and Swiss francs, which are more stable currencies than the American dollar. Wow, man, are you paying attention? I think it's far out to watch the greenback crumble. And I'll tell you the folks who don't care. The Amish, for instance, don't care. They didn't care in the last depression. It didn't make any difference to them. And the folks living out on farms don't care about that stuff.

Some folks go around saying, "The Kingdom is at hand, you'd better shape up." But the Kingdom is at hand doesn't mean it's going to happen in a minute or next year or anything like that. It means that if all of us were in perfect agreement it could be heavenly now. We have the free will to try hard and to be cool; we have free will and we have the power to make agreements, and we can agree on what's going to happen. This generation, right now, across this country, can agree on what's going to happen, and it will happen that way. That's how it's been so far. When it came to where there wasn't enough agreement to support Vietnam, Vietnam stopped, because there wasn't enough agreement to support it anymore.

Some of what we're doing is trying to wake folks up. And we say, "Look, the flying saucer people are not going to come and pick up your mess, you dig that? There ain't nobody going to pick it up but you, and if you don't pick it up it ain't going to get picked up." And we can have another generation of wasted time on the planet, but some of these times we've got to get it together, you know, and we could do it now.

That's what I go around the country with the band for: to try to talk to lots and lots of people, and try to tell them that kind of stuff, because I feel like the time we went around on the Caravan we made a difference. I think that we helped with the violence when we went around the country that time. And it says on the front of our bus: . . . That phrase is chosen from the old thing, "Well, I ain't out to save the world, but . . ." We are. Out front. I don't know anything else to do that seems worthwhile. I can already feed myself. I already was a college professor. Not much fun at this. Want to help?
LAND

People say, "How do you make it?" We say, "God supports us." And God supports us by keeping us high enough that it don't burn us to work. We feel that work is the material expression of love, and that love is not an abstract idea or something for a bumper sticker, but that if you really do love somebody you could find it in your heart to get off your tail for them.
A very interesting thing went down when we raised the water tower. I was standing on the ladder plying the hook in with the wire, and I turned around and told the crane operator, "Give me a little. Give me about this much." And he says, "Huh?" And I says, "Give me about this much." And he says, "You want to go up a little bit?" And I says, "Yeah, up." And he says, "Okay." And he not only asked me to repeat it twice, he turned it around and fed it back so that I'd have to say, "Yes, that's the right one." And I really loved him. I really appreciated him, because I was on this ladder leaning against it. Well, that's the way it ought to be. That's why that dude gets to operate that crane. He operates that crane because he doesn't squash anybody.

We don't quit being spiritual to go do our material plane. We think being spiritual at the motor pool, for instance, is being sure that the car is well blocked up so it ain't going to fall on anybody, so nobody has to have their head hung up in it and everybody around can be as high as possible.
We've been building on the Farm for over two years now and have gone through a number of changes, from underbuilt to overbuilt, from pioneer to flower child, and we've found that you get the most for your energy by using local materials and methods. We've built a sorghum mill, a laundromat, a motor pool building, a bathhouse, a print shop, a canning and freezing facility, a six-unit apartment building, and ten houses; and we have a community meeting hall and kitchen, a flour mill, and a number of other houses under construction.

It takes a lot of energy to build a house, so you need to have a clear idea of its structure before you begin. Plans on paper help a lot. It's important not to get overextended but to build simply at first. You can always add on and get fancy later. A fully enclosed foundation makes the warmest house, but our climate is fairly mild, so we mostly use piers. You can fill in between the piers later and still build a sturdy house. Look at old neighboring farm houses for ideas about what you might need in the way of a foundation. You can build anything if your foundations are secure.

In all our projects it's important to keep the crew together and stoned. Having the group head know all that's going on makes for smarter construction. Straw bosses are responsible for getting materials and tools together, keeping the flow going, and sorting the head. We try to keep the whole construction crew head together with meetings twice a week. We work for agreement about what we need, how to finance it, and how to build it. With the agreement, we can do it.

- Robert and Ronald, for the Construction Crew
Homer was going to run us out of the county the first time he saw us, because the last people he'd seen with long hair did stuff like peel out of the gas stations without paying for the gas and had some big-city smack and orgy scene, so he didn't have too much use for hair. But after he got to know us we were doing stuff like being partners with him in his sawmill and sending over a crew to help him maintain his farm. And a bunch of people that were shiftless, most of them being English majors and kilo dealers and other worthless types that hadn't never worked, learned how to run tractors and sawmills and learned how to farm.

Our first outhouses were simple open-front structures framed with poles and covered with oak slab and tin. The mistake we made with nearly all of these was that we dug the holes too deep and got into the water table. This keeps our holes brimful most of the time. To solve this problem and to take us one step closer to methane gas production, we're producing ferro-cement tanks that can be made fairly cheaply: $40 each for a 4'x4'x4' model. (Ferro-cement is made by putting four layers of 1" mesh chicken wire and one layer of hog wire into a mat and then pressing cement into this, making a wall 5/8" to 3/4" thick.) These tanks only weigh six hundred pounds, so they're light enough for us to carry to the site of the outhouse and lower into the ground. When they become full we can pump them out and carry the contents either to a nearby town for treatment or to a central methane bio-gas digester, which we hope to begin work on as soon as we get enough of these holding tanks together.

The basic consideration for building an outhouse is that it be sanitary. There's a direct relationship between the number of flies around here and our efforts to keep them from breeding in our outhouses. A little hydrated lime sprinkled in every time you shit keeps things covered. We find that if we space out on this even once, more flies are bred.
Some kerosene poured in the hole occasionally will calm things out if flies are breeding. It's also good to scrub the seats with disinfectant regularly. The house we're putting over each tank is 2 x 4-framed, with regular house siding. It's a fly-proof, weather-tight structure with doors and a divider down the middle. Parents and visitors appreciate privacy, so we're putting this kind on our main roads and in the busiest spots. We've been working out our cultural shit-shock as we go, but we can't expect everybody to be equally down home about it.

- Roger & the Outhouse Crew

I don't care who you are. I ain't concerned about who I am or any of them. I just try not to get hung up about that stuff. Some folks describe the ego as if there's this postal clerk in your head and he's going to send out the mail, and he's handling these letters, and every once in a while he writes "Kilroy was here" on the letters as they're piling out. That's your ego calling attention to itself: "Here I am!" Doing that number.

Buddha says, "Avoid error." It's very to the point. Avoid error; don't make any mistakes. One thing that's common to errors is not paying attention. I think you can pretty much assign error to not paying attention. I don't like to make mistakes, I really don't like it. But when I do, as soon as I realize I made a mistake, I just drop all that stuff and go on to the next project at hand and start working on that one as wholeheartedly as I possibly can. If you lay it all off on cause and effect and trial and error, you quit taking responsibility. You may not think what you think counts, but what you think is determining where you're at, and if you ain't making it, it's because of what you think.
That's what being spiritual is about: that you can change your head and it will change your life and it will change the world. And it's much easier to change your head than it is to shove the Universe around. Most psychology goes along with the institution of neurosis, which is to say the deification of ego. It teaches that you're so determined by stuff that you can't do anything really. But you're not determined that much if you have a free will and you're paying attention. You get to know what's happening more; you get to do more.

I believe we have as much free will as we can handle, as much as we have nerve for. However much responsibility you take is however much free will you can have, and if you don't take any responsibilities, you're determined. You're an effect in the Universe, and you ain't a cause. You're just like a leaf blowing in the wind. You don't count, in essence. If everything you do can be algebraically cancelled out to adding up to nothing, if you do everything random and you don't have any direction in your life, then your life added up to nothing. Somewhere you have to have a direction and do something that you know is right, so you can put your thing behind it. A man gets tired of going around feeling like a canary perched on a branch that's liable to break, and can't put his thing down, and can't come on behind his thing and do it. You have to have something you believe in to come on and do it. I see folks that have something like that put out and they really do heavy stuff.

Real morals is when you take care of the energy when nobody's looking.

There's a place in the Book of Proverbs in the Bible that says, "There is no need to fear the sudden fear." And it means, if you have knowledge of good karma, if you know where you've been for a while, and you know you haven't been wrong, then relax; ain't anything going to happen that's a surprise to scare you. You know, the universe is just. It ain't mysterious to me, it all works according to its laws. The weirdest thing I ever saw did not break the laws of karma.

The universe that you perceive is according to the subtlety of the instruments you perceive it with.
If you have a sloppy head when you perceive the universe you get a sloppy universe back. But if you get yourself together you can perceive a clean universe that works by clean laws. And you don't ever have to be afraid again, or ever doubt the reality of the universe; the reality that it's a fair shake, and that you can make it according to the old-fashioned ground rules.

I know why I'm alive; I know why I'm here, I know what I'm doing, I know what I'm doing it for, and I ain't afraid of anything. And you can be that way, too.
Stephen teaches that it's being compassionate with our fellow man to be vegetarians and not eat more than our share, and it's being compassionate with our fellow animals to not eat them. He says:

I feel like it ain't a question so much of whether meat is good for you or not as it is that I want to be as harmless and as little of a hassle to the Universe and to mankind as I possibly can, so as to not make my support a burden on the Universe in any way. I really dig the Universe and I really dig the trip, and I don't want to put anything on it at all. So I decided to support myself as far down the scale of living beings as I possibly could and still survive healthy: make a whole quantum jump and say, no animals, just plants.

It's so grossly uneconomical and energy-expensive to run soybeans through a cow and then eat the cow instead of just eating the soybeans that it's virtually criminal.

We're absolute vegetarians for several reasons, one of them being that I'm as telepathic with animals as I am with people and it's weird to eat them.

Our vegetarian diet is simple. We eat just about everything except animals and their extensions: meat, fowl, fish, eggs, milk, and honey; and stimulants such as coffee, black tea, and ginseng. We're vegetarians for religious reasons, not because we're paranoid about our health. Our diet has no taboos on any food of plant origin (except stimulants, because the world needs to slow down, not speed up).

Being complete vegetarians, we don't drink cow's milk or eat any dairy foods. Here's how Stephen has answered the milk question:

Many people don't know that you must get a cow pregnant every year in order for her to continue to give milk, that they don't give milk spontaneously. You have to get them pregnant every year, and they have calves. Half of them, the females, you can add to the milk herd, but the males are used for veal cutlets because they don't usually raise bulls.
They buy fancy bulls for breeding purposes. And when milk cows get old, they don't retire them or bury them in peaceful graves. They grind them into gristly old sausage. It's all part of the same racket in every way.

So we learned how to make soy milk, and we make eighty gallons of soy milk a day five days a week for about $20.00 worth of soybeans a day. And the soy milk is comparable in protein to cow's milk.

We eat sugar rather than honey as our staple sweetener, because sugar comes from a plant. Here's what Stephen has said about honey:

I think honey is a fine food, but I don't dig having to mess with bees at that level, because that puts me back in the animal business again, and the only difference between a bee farmer and a cattle farmer is that you have smaller animals with six legs instead of four legs. If you leave bees alone and let them do their thing in the hive, when the queen becomes mature she'll leave the hive and lead them back into the woods. So you have to take their queen. They don't dig you to take their queen and they don't dig you to take their honey. Commercial honey farmers give them white sugar to live on in the winter and sell their honey. I'd rather eat the white sugar. When you get into that level of it, you're running a bunch of life force around that you're taking responsibility for, and doing things with that life force, and I don't want to put life force under bondage of any kind.

Contrary to the opinions of many other beatniks and health food stores, we eat white sugar. If eaten wisely, sugar is a clean-burning fuel that causes no harm. There's an emotional rumor out that says sugar "destroys" B vitamins. Thiamin (a B vitamin) acts as a catalyst in the metabolism of carbohydrates (sugar and starch).
That's its gig. If you eat wheat germ, brown rice, nutritional yeast, and enriched or whole wheat flours, you'll have plenty of thiamin to metabolize your sugar.

We also eat some enriched white flour as well as whole wheat flour because sugar and white flour are helpful in maintaining a high enough calorie intake. A vegetarian needs the "protein-sparing effect" of plenty of carbohydrates. If you eat enough carbohydrates, your body will not dip into your day's intake of protein or its own store of tissue protein for fuel. We eat enriched white flour as well as whole wheat because it's easier on the stomach and digestion of many people, especially children and older people. The cellulose of wheat bran doesn't break down easily and is scratchy on the innards (it acts as a laxative by causing the lining of your tube to secrete more mucus). Enriched white flour has 80% of the protein of whole wheat flour, but it's not so heavy and it's easier to eat more of it. We use enriched white flour rather than unbleached white flour because it has added B vitamins and is a major source of these vitamins. If you mill your own unbleached white flour, it's a good idea to add a standard bakery vitamin mix to it. The germ you mill out of it makes a high-protein concentrate breakfast food as well as a source of B vitamins.

There are vegetarian diets that are more complicated and more restricting than ours, but most of them are not healthy or practical for large numbers of people. We don't cop to the macrobiotic or fruitarian diets because they're inadequate nutritionally and will make you sick and weak. Macrobiotics doesn't provide hardly any protein except for a carp (large goldfish) now and then, which is very yang and not vegetarian. Macrobiotics is into yin and yang qualities of foods, but they're biased toward the yang side with fish, burdock root and tobacco, and down on the yin foods such as citrus fruits, tomatoes, sugar and bananas. This is not a healthy attitude. The fruitarian diet is at the other extreme.
It requires that you eat mostly fruit. You can eat some nuts, but you aren't supposed to cook anything, so that leaves out soybeans, other legumes, rice, wheat and other grains. This diet appeals to people who are freaked out by mucus. It advertises to keep you free of the slippery stuff. But you need mucus to lubricate the delicate machinery of your body. It keeps you from squeaking, lets your food slip along the digestive tract, and keeps your nose moist. The macrobiotic and fruitarian diets can cause kwashiorkor, the protein-deficiency disease.

Another school of nutrition that beatniks often follow insists on the beneficial effects of fasting, purgatives, high enemas, eliminating diets, laxative herbs and diuretic teas. They talk of "poisons in your body" and putrefaction of the innards; and they badmouth vinegar, sugar, baking powder and table salt. This diet and the fruitarian diet are based on inaccurate notions of how the body works. They assume that you're always full of some kind of gunk you need to get rid of, and the longer it stays in there the more it poisons you. Your monkey is not that inefficient. Your food is carried through a long clean pink tube that mostly takes care of itself, is tough enough to handle most anything in any combinations, and knows how to digest your food and process the leftovers better than you do.

There has been much misinformation and superstition about food and particularly the vegetarian diet. So to avoid old rumors and unhealthy vegetarian variations, we're including the following basic information on nutrition, with special emphasis on protein, calcium, and vitamin B12.

VITAMINS AND MINERALS

Vitamins are organic substances that interact with enzymes manufactured by the body from protein. They're catalysts in the chemical reactions of the body, from digestion and metabolism to transmitting nerve impulses. The fat-soluble vitamins, A, D, E, and K, are found associated with the oils of plants and are absorbed along with those oils.
They can be absorbed in large amounts and are stored in fat. The water-soluble vitamins, C, bioflavinoids, and all of the B vitamins, are stored in small amounts in the tissues, enough for a few weeks (except for B12, which can be stored in the liver for long periods of time). Water-soluble vitamins will usually go into the cooking water of vegetables, so that water should be used in some way. Because fat-soluble vitamins can be stored in the body in large amounts, it's possible to get too much (except for E). This is not likely if your vitamins come from your daily food, but some vitamin supplements have 25,000 I.U. of vitamin A and 800 or more I.U. of vitamin D. If you take vitamin supplements, get the kind with 4,000-5,000 I.U. of vitamin A and 200-400 I.U. of vitamin D.

Minerals are little bits of inorganic metal or rock that we need to build the mineral portions of our body, and to help form certain organic compounds such as hemoglobin (of blood) and insulin. The electrolyte minerals, sodium (salt), potassium, and chloride, regulate the water balance of the body.

We give all of our pregnant ladies and nursing mothers a prenatal vitamin and mineral supplement. To build a baby and to nourish that baby increases a lady's need for all vitamins and minerals, and we want to make sure she and her baby have everything they need. Our pregnant ladies also take three iron pills a day (ferrous sulfate, 5 grains), one after each meal. Nursing mothers and ladies in their last half of pregnancy take two 7 1/2-grain tablets of dicalcium phosphate (1 gram). Dicalcium phosphate is an easily absorbed form of calcium.

We give all of our babies vitamin drops containing vitamins A, D, and C, and iron. They get the drops from six weeks old to eighteen months, or whenever they're reliably into enough beans for their iron.

FAT-SOLUBLE VITAMINS

Vitamin A - Carotene (provitamin A) is the yellow pigment in carrots, sweet potatoes, squash, and other yellow vegetables, and is synthesized from the sun in dark leafy greens.
Vitamin A itself occurs in animal foods, but carotene occurs in vegetables and is converted in the body to vitamin A. It's necessary to good vision in the regeneration of visual purple, for growth in children, and for healthy skin and hair. It's fat-soluble and can be stored in the body. Foods rich in carotene are: carrots, pumpkins, spinach, collards, chard, sweet potatoes, winter squash, turnip greens, kale, mustard greens, cantaloupe, and apricots. Carotene is stable to heat, and cooking the vegetables helps the body to utilize it.

Vitamin D - Calciferol (provitamin D) is synthesized in the skin from the ultraviolet rays in sunshine. It's also available as ergocalciferol in irradiated yeast and certain other irradiated plants and plant oils. Vitamin D itself is in fish oils primarily, but calciferol serves the same function in the body. The requirement for vitamin D can be met entirely by skin irradiation, but in areas where there's little sunshine or very short summers, it may be necessary to supplement it, especially for children. Vitamin D is necessary for the growth and health of bones and teeth, and it helps in the absorption and retention of calcium and phosphorus. A lack of it can cause rickets in children. It's fat-soluble and can be stored in the body from the summer for the winter, and it's stable to heat.

Vitamin E - Tocopherol is an antioxidant that controls the oxidizing of fatty acids in the body. It's also helpful to the circulatory system and the heart. Vitamin E is abundant in vegetable oils, margarine, oil seeds such as soybeans, and the germ of whole grains. It's fat-soluble and stable to cooking, and is stored in the body.

Vitamin K - Menadione helps form prothrombin, which clots the blood. Deficiency of this vitamin is not likely since it's synthesized abundantly by bacterial flora in the intestines, and it's also available in green leafy vegetables.
It's fat-soluble and stable to heat.

WATER-SOLUBLE VITAMINS

Vitamin C - Ascorbic acid maintains healthy teeth and gums and strong capillary walls, and helps in the absorption of iron. It's also important to the resistance of disease. A deficiency causes general poor health and scurvy. It's unstable to heat, light, air, water, and storage, but more stable in an acid medium and cool temperatures. Vitamin C must be replenished often, because it's very water-soluble, and excesses are peed away. It has a half life in man of about 16 days. Foods rich in ascorbic acid are: citrus fruits, berries, tomatoes, green peppers, melons, dark leafy greens, bean sprouts, Brussels sprouts, broccoli, cauliflower, strawberries, potatoes, and the needles of conifers (in tea). It's best to eat these foods uncooked and quite fresh, but if you do cook them use a small amount of water in a covered pot or steam them.

Vitamin P - Bioflavinoids goes along with vitamin C because it's important to resistance and is found in many of the same foods. It reduces the fragility of capillaries and regulates their permeability. It's water-soluble and found in citrus fruits, rose hips, black currants, cherries and the needles of conifers (in tea).

Vitamin B1 - Thiamin is a coenzyme in carbohydrate metabolism and is necessary for normal growth. It also prevents beriberi. It's water-soluble and fairly stable to heat in an acid medium, but unstable in an alkaline medium. The body can store excesses for several weeks. Thiamin occurs in small amounts in most foods, but its primary sources are the germ and bran of whole grains, nutritional yeast, nuts, peanuts, and enriched cereal products.

Vitamin B2 - Riboflavin is a part of many enzyme systems. It's involved in tissue respiration (hydrogen transfer), metabolism, and the oxidizing of certain fatty acids. A lack of riboflavin can cause cracks in the corners of the mouth.
MINERALS

Iron is needed mostly to form hemoglobin, the protein molecule of the red blood cell that carries oxygen to all parts of the body. So the main function of iron is in cell respiration. The body only absorbs as much iron as it needs to replace any losses. Ordinarily it absorbs very little of the ingested iron (about 10%) because it recycles almost all of the iron that is absorbed. Iron absorption is increased when the need for iron is increased in anemia. Iron-deficiency anemia is rare in adults, except in pregnant ladies who are forming the hemoglobin of their babies, and in ladies who have heavy menstrual periods. These ladies should supplement iron. Most ladies can replace their monthly iron losses through their diet. In men, and in ladies after menopause, there is little iron lost and therefore little iron needed. Iron is stored in the liver, spleen and bone marrow. The best sources are: dried beans, leafy greens, rice, wheat, sesame seeds, sunflower seeds, oatmeal, nuts, dried fruit, molasses, and iron cooking pots.

Calcium is one of the main minerals forming the bones and teeth. About 99% of the body's calcium is in these bony structures. The other 1% is in the blood plasma and tissues, and helps determine blood coagulation, muscle contraction, heart function, and the permeability of membranes. Vitamin D is required for efficient absorption of calcium. The best sources of calcium are: sesame seeds, collards and other leafy greens, almonds, soybeans and other beans, nuts, sunflower seeds, orange peel and citrus fruits, broccoli, okra, wheat germ, peanuts, dried fruits, snap beans, wheat, Brussels sprouts, and summer squash (in that order). But the real question about calcium is the requirements.

Calcium requirements

When people consider being total vegetarians, using no milk or dairy foods, one of the most frequent questions is about calcium requirements. The Food and Nutrition Board of the National Research Council has a very high recommended allowance of 800 mg. per day.
It's difficult to consume this much calcium without drinking a lot of milk. This allowance is believed by many to be too high. The human monkey is very adaptive in regard to calcium equilibrium. There is a constant turnover of calcium in the body, from bone to blood plasma, from diet and through excretion. The body does not want too much calcium, and when intakes are quite high, output is also quite high. Many people that drink a lot of milk pee more calcium than non-milk drinkers take to maintain calcium equilibrium. We don't supplement calcium except for nursing and pregnant ladies, because once your body has adapted to lower levels of calcium, there is plenty of it for your needs in the vegetable kingdom.

Since this is a controversial matter, I don't want to just leave you with a sketchy opinion, so I would like to quote for you from some recent studies and the United Nations World Health Organization report on calcium.

These are excerpts from an article written in May 1972 to determine if cereal should be supplemented with calcium in South Africa:

Adults who have grown accustomed over a long period of time to a calcium intake greatly in excess of their true needs may no longer absorb enough calcium to keep themselves in equilibrium when their intake is suddenly reduced under the conditions of a short term experiment. . . . These recommendations (Food and Nutrition Board, 800 mg.) were based partly on the results of experiments conducted on individuals accustomed to a good Western diet (rich in calcium) and partly on informed guesswork. In many parts of Africa and Asia children develop healthy bones and adults remain in calcium balance despite much lower calcium intakes. The WHO/FAO (1962) Committee on Calcium Requirements suggested that a practical allowance for adults should be between 400 and 500 mg. per day.

Human Nutrition and Dietetics, 4th ed.
Sir Stanley Davidson, 1969.

Here is some more from the World Health Organization's report on calcium requirements:

It was thought that the question of calcium requirement deserved early attention, particularly because of considerable uncertainty and conflicting views on this matter. On the one hand, more people fail to get the currently recommended allowances of calcium than of any other nutrient, while on the other hand, it is recognized that there is little convincing evidence of specific disabilities attributable to dietary calcium deficiency.

Most apparently healthy people, children and adults, throughout the world develop and live satisfactorily on a dietary intake of calcium which lies between 300 mg. and 1,000 mg. a day. There is so far no convincing evidence that, in the absence of nutritional disorders and especially when the vitamin D status is adequate, an intake of calcium even below 300 mg. or above 1,000 mg. is harmful.

Among South African Bantu, the general range of intake [of calcium] being 175 to 475 mg. per day . . .

Bone composition. Investigations on bones from Indians, Bantu, and Ugandans, compared with Caucasians, have revealed no clear-cut differences in mean chemical composition (total mineral matter, calcium, phosphorus). The bones studied chiefly were rib, femur, and tibia. A low calcium intake, therefore, does not prejudice bone composition.

Dental caries. There is adequate evidence that possession of good teeth by underprivileged populations is compatible with an habitually low intake of calcium.

Rickets. . . . It is generally accepted that rickets is due almost wholly to low vitamin D status. In a review published in 1956, the author concluded that there was no specific evidence that a low calcium intake per se promotes or causes the occurrence of rickets.

Summary. In South Africa, enrichment of staple cereals is under consideration. In view of the known low calcium intake of three-quarters or more of the total population (300 mg.
a day), a decision is required on what priority, if any, should be given to calcium supplementation. An examination has therefore been made of bodily processes and disease conditions likely to be prejudiced by a low calcium intake. . . . The conclusion is reached that there is no unequivocal evidence that an habitually low intake of calcium is deleterious to man, or that an increase in calcium intake would result in clinically detectable benefits.

"The Human Requirement of Calcium: Should Low Intakes be Supplemented?" Alexander Walker. The American Journal of Clinical Nutrition. May, 1972.

A current textbook on nutrition has this to say about calcium:

Nicholls and Nimalasuriya (1939) showed that growing Ceylonese children often maintain a positive calcium balance on intakes of about 200 mg. of calcium a day. Their observations have since been amply supported by observations made on citizens of such diverse places as Johannesburg (Walker and Arvidsson, 1954), Mysore (Murthy, 1955), and Peru (Hegsted, 1952). Bantus, receiving no more than 300 mg. of calcium a day, have a normal level of calcium in the blood, and, more important, normal amounts in their bones.

It has been established beyond doubt that the development of rickets and dental caries is largely independent of calcium intake.

Current knowledge does not permit any definite view of the relative merits of maintaining calcium equilibrium with relatively high or relatively low intakes, as it may influence the health of a population. The important point is that populations with habitually low intakes achieve equilibrium at lower levels than has previously been supposed and that populations accustomed to relatively high intakes can achieve equilibrium at lower intakes.

It appears that no frank ill effects attributable to calcium deficiency have so far been reported in children receiving an habitually low calcium diet. . . . Broad experience of South African Bantu children also supports the view that a calcium intake only a little above 200 mg.
in the diet is sufficient to prevent obvious calcium deficiency.

Calcium Requirements: Report of an FAO/WHO Expert Group. 1962.

Phosphorus is in every cell of the body, but most of it is in the bones and teeth, along with calcium. It's also an important part of the genes. It's used in the metabolism of carbohydrates, fats and proteins, and is needed for the acid/base balance of the body. Phosphorus is in all natural foods; it's in all organic matter. Although it's essential, a deficiency is unknown in man, and is unlikely unless one eats only refined, rather than natural, foods. A balanced diet will have plenty of phosphorus; good sources are: dried beans, whole grains, nuts, especially peanuts, and dried fruit.

Magnesium is the third major constituent of bones. It's an important part of the soft tissue, and activates enzymes. A deficiency, causing nervous disorders, is rare except in general malnutrition. Magnesium occurs in whole grains and leafy greens (it's a part of chlorophyll).

Copper is necessary for the production of hemoglobin (although there is copper in the blood); it activates iron in the synthesis of hemoglobin and helps the absorption of iron. A deficiency causes anemia, depigmentation and degeneration of the circulatory system, but a deficiency is rare because it's in most foods.

Salted and sweetened dilute orange juice is good. Potassium is in most foods and rarely leaves the cells except from prolonged diarrhea. To keep up with it, eat plenty of beans, grains, nuts, and dried fruit.

A salt deficiency can occur when water intake is excessive and no salt is provided. There is no thirst, just weakness. This can be avoided by salting your food well in the summer when you are drinking a lot and sweating. If you are working hard outdoors in the summer, you need a gram of extra sodium for each quart of water past the first quart. If you can't eat it, take sodium chloride tablets. Besides table salt, sodium comes in many foods, mostly as sodium chloride.

Water is the most important ingredient in the body. It surrounds the cells and it's the medium between them; it transports food to the cells and waste from them.
It's the medium through which changes in the body take place. It regulates the body temperature by evaporation from the lungs and skin. You lose water from the lungs, from peeing, and some from the skin: less than a quart a day in a moderate climate, up to gallons a day in extreme desert heat. Water comes from food and drink, and from the oxidation of fat and carbohydrates.

Carbohydrates are broken down through digestion into monosaccharides, or simple sugars: sucrose breaks down into glucose and fructose, and maltose into glucose. (Maltose is found in sprouted and partially digested grain, and malt.) Starch is found in grains, potatoes, and beans; in digestion it turns to sugar. All starches are broken down to simple sugars before they reach the bloodstream. For grains, digestion starts in the mouth, where ptyalin, an enzyme in saliva, starts to break down the starch.

Liquid vegetable oils contain more unsaturated fatty acids (except coconut oil). Solid, or hydrogenated vegetable oils such as margarine and shortening are more saturated, but if they are made only from seed oils, they do contain some unsaturated fatty acids, especially linoleic acid. (Animal fat and butter are quite saturated.) Linoleic and arachidonic acids are considered essential and need to be obtained from food because the body cannot synthesize them, as it can the others, and the presence of one or the other is necessary for growth, healthy skin, and fertility. Linoleic acid can be synthesized into arachidonic acid, so it is actually the most important essential unsaturated fatty acid. A deficiency, although not known in adults, can cause poor growth and eczema in weaned babies fed a low-fat diet. Unsaturated fatty acids, especially linoleic acid, also help metabolize and eliminate excess cholesterol in the blood stream. Plants contain no cholesterol (a lipid), but the body synthesizes it to form steroid hormones and bile. Too much cholesterol along with an excess of saturated fats can clog the arteries and cause heart trouble. Linoleic acid occurs abundantly in soybeans, wheat germ, and safflower, corn, soy, cottonseed, and peanut oils. Fats, especially unsaturated, are unstable to oxygen and will turn rancid by oxidizing, so keep them covered.
If you deep-fry foods, don't let the oil smoke, strain it after using, and don't use it more than twice. (When it smokes, it is beginning to break down.)

The word protein comes from the Greek word, proteios, which means primary. You are mostly made of protein, except for water and the mineral portions of bones. It forms the muscles and skin, hair and nails, the hemoglobin of the blood which carries oxygen to the body; it forms enzymes and most hormones which regulate the metabolism and functions of the body; it helps maintain the fluid balance of the body and acts as a buffer for acid and base; and it forms the antibodies which protect you from unfriendly microorganisms.

Protein is made up of smaller units called amino acids. There are twenty-two amino acids. In different combinations and different numbers, they make up the different proteins of the body. Fats, carbohydrates and proteins are all made of carbon, oxygen and hydrogen, but protein contains the added element of nitrogen (and sometimes sulfur). Your body can synthesize most of the amino acids. These are called the non-essential amino acids. The amino acids which cannot be synthesized by the body from nitrogen and other substances containing carbon, oxygen and hydrogen are called essential amino acids. There are eight of them: tryptophan, threonine, isoleucine, leucine, lysine, methionine, phenylalanine, and valine. A ninth, histidine, is essential to babies for growth.

The body's own protein is constantly being broken down into amino acids and resynthesized back into proteins. These amino acids are not different from those obtained from food. Together they form the amino acid pool that services the body. Some nitrogen is always being excreted and some is always being added by eating. New amino acids are needed to replace those already present and to form new protein for growth and healing. If the nitrogen lost is the same amount as the nitrogen gained, the body is in "nitrogen balance".
Growing children and pregnant and nursing ladies should be in "positive nitrogen balance", that is, more gained than lost.

When you eat the protein of plants, which they make from the nitrogen of the soil and air, it's absorbed as amino acids and resynthesized as protein in the tissues. To resynthesize in the tissues, the essential amino acids need to be in a specific ratio to the total amino acids (total protein). If an essential amino acid is missing, the other essential amino acids that would make up the complete protein in the tissues are unusable as such and are broken down into fats or sugars (certain ones go to fat and certain ones to sugar), and the nitrogen is lost as urea or goes to form non-essential amino acids. If a certain amino acid is lower in proportion to the total protein than it should be, the protein will resynthesize in the tissues up to the level of the limiting amino acid (or acids), and the remaining ones that are incomplete will break down.

The function of protein is mainly to provide for tissue growth and repair, but if the carbohydrate and fat intake (calories) is inadequate, it will be used for fuel. Carbohydrates and fats are called "protein-sparing" because they leave the protein for its own special functions.

Protein is not stored in the body like carbohydrates and fat. Some reserves can accumulate in the liver and possibly the muscles. But these small reserves actually become part of those tissues, so the storage of protein causes the cells of those tissues to be larger, and protein starvation will cause atrophy of the tissues.

Specific functions for many of the essential amino acids, aside from their necessity in the body's protein pool and from their general necessity for promoting growth and regeneration, are not known. Threonine may be important for the utilization of fat in the liver. Lysine and histidine are necessary for growth in babies. Methionine is a source of sulfur for the body, and tryptophan is a precursor of niacin (60 mg. tryptophan equals 1 mg.
B3).

PROTEIN REQUIREMENTS

The following tables and figures for protein requirements are taken from the World Health Organization and the Food and Agriculture Organization of the United Nations in their 1973 joint report on protein requirements. Of all the material available on protein requirements, the report of the United Nations is the most complete and comprehensive of current studies worldwide.

SAFE LEVEL OF PROTEIN

                                  Safe level of               Adjusted level for proteins
                         Body     protein intake              of differing quality
Age group               weight   (g protein    (g protein      80%    70%    60%
                         (kg)    per kg        per person
                                 per day)      per day)

Infants, 6-11 months      9.0      1.53            14           17     20     23
Children, 1-3 years      13.4      1.19            16           20     23     27
  4-6 years              20.2      1.01            20           26     29     34
  7-9 years              28.1      0.88            25           31     35     41
Male adolescents,
  10-12 years            36.9      0.81            30           37     43     50
  13-15 years            51.3      0.72            37           46     53     62
  16-19 years            62.9      0.60            38           47     54     63
Female adolescents,
  10-12 years            38.0      0.76            29           36     41     48
  13-15 years            49.9      0.63            31           39     45     52
  16-19 years            54.4      0.55            30           37     43     50
Adult man                65.0      0.57            37           46     53     62
Adult woman              55.0      0.52            29           36     41     48
Pregnant woman, latter
  half of pregnancy                              Add 9       Add 11 Add 13 Add 15
Lactating woman,
  first 6 months                                Add 17       Add 21 Add 24 Add 28

"Safe level" means that the figures given are 30% above the average requirement and should cover the needs of the great majority of individuals. "Protein of differing quality" means how the protein or combinations of proteins stack up as compared to mother's milk (or eggs), which is considered to be the highest quality protein available. (In their 1973 report, the World Health Organization changed from a conceptual ratio of amino acids to this more natural standard.) If grain is the primary protein source, it falls in the 60% range. A well-combined vegetable diet (beans, grains, nuts, seeds) is 70-80%. If soybeans and many other soy products are the main protein, it's 80-90%. (A meat diet is around 85-90%.)

The highest quality plant protein occurs in soybeans, wheat germ, rice, garbanzo beans, sunflower seeds, millet, sesame seeds, spinach, and nutritional yeast.
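The table's arithmetic is simple: the safe level in grams per day is the per-kilogram figure times body weight, scaled up when the diet's protein quality falls below the mother's-milk reference. A minimal sketch of that calculation (the function name is just for illustration, not from the report):

```python
# Safe protein level, after the FAO/WHO 1973 figures: grams per kilogram
# of body weight per day, divided by protein quality relative to mother's
# milk (1.0 = reference; a well-combined vegetable diet is about 0.70-0.80).

def safe_protein_g_per_day(weight_kg, g_per_kg, quality=1.0):
    """Daily protein allowance, adjusted for protein quality."""
    return weight_kg * g_per_kg / quality

# Adult man from the table: 65 kg at 0.57 g/kg is about 37 g/day;
# on a 70%-quality vegetable diet that becomes about 53 g/day.
print(round(safe_protein_g_per_day(65.0, 0.57)))                # 37
print(round(safe_protein_g_per_day(65.0, 0.57, quality=0.70)))  # 53
```

The same scaling reproduces the adjusted columns of the table; for instance, the adult woman's 29 g/day becomes 48 g/day on a 60%-quality all-grain diet.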
Other beans and seeds, and most grains and nuts, contain all the essential amino acids, but some are in small quantities or their proportions are not optimum, so they need supplementing with other foods that are high in their limiting amino acids, or they need to be eaten in large quantities. Proportions can be changed and missing amino acids can be added by combining different plant proteins in the same meal. (You need to supplement within the same meal because your body will not hold over the extras for the next meal.)

The main limiting amino acids to consider are lysine, total sulfur-containing amino acids, and tryptophan. The limiting amino acids in grains, seeds and nuts are usually lysine, isoleucine and threonine; in beans, sulfur-containing amino acids and tryptophan (except soybeans). There are exceptions to this, so you can look at the amino acid tables to get an idea of which foods to combine.

It's better to figure protein quality from combinations of foods in meals, since combining certain foods increases their protein quality, and we usually eat our food in combinations rather than alone, anyway.

Most vegetable protein is only about 80-90% digestible. (Soybeans are 90%, brown rice is 95%.) The digestibility of wheat and beans is improved by long boiling.

When combining plant proteins, the basic combination is beans served with grains, nuts, or seeds. Grains also combine well with nutritional yeast. Beans and nutritional yeast are very high in lysine, the limiting amino acid of grains. Almost any combination of different types of plant proteins will help fill in each other's gaps. Green leafy vegetables have high-quality protein and can contribute a lot if they're eaten in large quantities.

The concept of mixing vegetables for a complete protein has been recognized by different organizations concerned with feeding folks. The United States Department of Agriculture, in their Food, The Yearbook of Agriculture, 1959,
said:

We have come to realize that perhaps some of the chemical units that make meat, milk and eggs superior foods for filling the protein needs of people may be supplied by skillfully combining certain foods from plant sources in special proportions. For that, our knowledge of the chemistry and requirements of amino acids is most useful.

They also said:

Our agricultural geneticists have been developing strains of the cereal grains that will provide good sources of all the amino acids including lysine and tryptophan. It looks as if, in the near future, the cereal grains produced by the everyday farmer will be nearly complete foods in that they will provide the bulk of nutrients needed by man for his daily food.

It was said in Proceedings of the 6th International Congress of Nutrition:

From a nutritional point of view, animal or vegetable protein should not be differentiated. It is known today that the relative concentration of the amino acids, particularly the essential ones, is the most important factor determining the biological value of a protein. . . . By combining different proteins in appropriate ways, vegetable proteins cannot be distinguished nutritionally from those of animal origin. The amino acids and not the proteins should be considered as the nutritional units.

Since soybeans have such high-quality protein, and so much of it, they should be your main staple. Eat them three times a week, as well as soy milk, soy cheese, and soy yogurt. Wheat germ and nutritional yeast should also be eaten regularly for their high-quality protein and B vitamins. The best kind of nutritional yeast is Saccharomyces cerevisiae, a primary food yeast grown in a molasses solution. It's yellow or gold from its riboflavin content, and available in powder or flakes.
It tastes so good that you can sprinkle it by the spoonful on your vegetables or popcorn.

Since we can get everything we need from vegetable foodage, and since one can't get very telepathic or high eating those who are so close, it seems obvious that being a complete vegetarian is the kind and Holy way to make it.

It's good to nurse your baby until he's around one year old. You will usually nurse around a quart of milk a day, containing enough protein to cover him up to five or six months. Mother's milk has everything he needs, except it's low on vitamin D and iron. The iron he carries from birth lasts until three or four months. At about six weeks, start him on vitamin and iron drops, and continue them until about 18 months or whenever he's reliably into beans and vegetables.

When you start feeding your baby solid foods, he may push them out of his mouth with his tongue. He doesn't dislike the food, he's just learning how to swallow solids. After he knows how to swallow the food, if he still spits it out, don't force him, try again later.

Cereal. Processed baby cereal can be started at six weeks to three months. Start with rice cereal and add the others gradually. During the next couple of months, ease into giving him a lot of high-protein cereal (it's made from soybeans and wheat germ and is 35% protein). By five months, he should be eating high-protein cereal several times a day. This, along with your milk, is a trusty source of protein until he's solidly into soy and other beans. If high-protein cereal is his only source of protein, 3/4-1 c. a day would fill his protein requirement. The processed cereals are also fortified with several B vitamins and iron. You can add sugar to the cereal, or he might like a little salt or soy sauce.

Fruit. Fruit can be given at two months. Applesauce is a good one to start with. Dilute orange juice with water at first. Wait a few months on strawberries because they can be allergenic.

Vegetables. Strained vegetables can be started at three months.
When you start to feed your baby strained vegetables, taste them and dig them. Your prejudices about consistency or taste are telepathic and obvious to your baby.

Soy Milk. You can start giving your baby soy milk at four or five months if you want to. Soy milk is a good added source of protein and will replace your milk when you wean him. Until your baby is about a year old, and especially in warm weather, you should sterilize and can it up in the morning for the day. You don't need to do this if your baby is older and has a hearty stomach, and if it's very fresh soy milk. Here's how you can it up: 1. Put fresh soy milk into clean bottles or jars, and lid or cap them loosely. 2. Put them in a pot on a rack or something to keep them off the bottom and fill the pot with water halfway up the jars. 3. Cover and boil for 30 minutes. 4. Let stand or put the jars in cool water to cool the milk some. 5. Twist the tops tight and put in the refrigerator or a cool place.

You can also feed your baby yogurt regularly at this age. It adds to his good intestinal bacteria and will firm up his shit. You can sweeten it or add fruit to it. Puddings made with cornstarch are good too.

Starches & Unprocessed Grains. Mashed cooked grains are different than processed baby cereal and are harder to digest. They can be started at five or six months. Thin mashed rice or millet with a little liquid at first. Mashed potatoes without the skins (until he has molars) are good thinned with a little soy milk and some margarine. For fresh cooked oatmeal, use the instant huskless kind. At six or eight months, babies like to gum toast or mild white flour crackers or cookies. Watch him when he starts to do this until he learns to gum and swallow it right.

Beans. At five or six months, start trying him out on beans. Thin split pea soup is a good one to start on. Any beans you feed your baby must be cooked until very soft, and mashed thoroughly or put through a sieve or baby food grinder.
Add liquid to the mashed beans to make them a soupy consistency rather than a thick paste. Try one kind of bean at a time so you can see which get digested. You can tell whether or not he's digesting the beans by checking his shit. If the beans look mostly unchanged or it smells sour, he's not digesting them. If he gets diarrhea, stop the beans and give him yogurt and a bland diet till it's together. Then try some other kind and put that kind off till later. You can try soybeans but they must be very very soft. If they are at all crunchy, they can give him diarrhea and a sore red bottom. If well cooked, soybeans do well with babies. Mash them through a sieve and don't use the skins until he's older. Later you can mash them with a fork or baby food grinder and leave the skins in. If you use a blender, the skins are okay.

White sugar is the best sweetener for your baby. Brown sugar and sorghum will loosen his shit. If he gets constipated (unlikely), give him some brown sugar in his food. Salt your baby's food lightly, especially in the summer. But don't overdo it.

Nutritional yeast can be added to his food sometimes. It's a good source for all the B vitamins except B12. He will get B12 from your milk, and later from the fortified soy milk. If your soy milk is not fortified, give him a 25-mcg. tab twice a week after weaning.

Be sure your baby gets plenty of water or other liquids, especially in the summer. In the winter, your milk is enough liquid if he doesn't seem to want water.

Here's a spiritual reason for being a vegetarian: You can get ten times as much protein growing soybeans than raising beef cattle. If everyone was vegetarian, there would already be enough to go around, and no one would be hungry.

When you're cooking beans, be sure to cook them until they're very soft. Crunchy beans don't make it. Most beans take about 4 cups of water for each cup of beans. You may need to add more water if much of it evaporates as steam.
It takes about 45 minutes for lentils, an hour for split peas, and from 3 to 6 hours for most other beans, except soybeans, which take 7 hours or more. Cooking time can be slightly reduced by soaking overnight.

A pressure cooker is very economical of time and fuel. In a pressure cooker, use 2 cups of beans with six cups of water and some salt. Add 1/4 cup of oil to keep the beans from foaming and sputtering and clogging the little vent hole. Cooked this way, most beans only take an hour or so. Cook until the liquid turns to a gravy between the beans. They'll be soft before this point, but cooking longer lets them break down a little and thicken evenly. After the pressure comes down, add the spices and simmer to let the flavors blend and the gravy thicken. To test them, squeeze one between your tongue and the roof of your mouth. This may take a long time, but beans aren't good if they're crunchy; when they're really soft, they're done.

Some beans, mainly soybeans, contain the trypsin inhibitor. It inhibits trypsin, a digestive enzyme, and hinders the digestion of protein. This "anti-enzyme factor" is destroyed by boiling the beans for a minimum of 2 1/2 hours (or pressure-cook for 30-45 minutes). The toasting of soy flour will cover this, as it's a fine powder the heat can easily reach.

When toasting wheat germ or other grains, do it in a low oven slowly. Lysine and some B vitamins can be lost by toasting at high temperatures in a dry oven.

These recipes which combine beans and grains are high-protein, amino-acid-matching recipes which will give 10-15% more protein than the ingredients eaten alone.

CHILE BEANS AND FLOUR TORTILLAS

Cook 2 cups pinto beans in 8 cups water. Add 2 or 3 sliced onions and pressed garlic. Cook about 6 hours. Toward the end, add: 2 tsp. salt, 2 tbsp. chile powder, 2 tbsp. cumin.

Tortillas: 4 c. flour, 1/2 c. oil, 2 c. water. Add: 2 more cups flour. Knead a little. Roll out and cook on a dry frying pan or griddle.

1 1/2 c. cooked pinto beans and 3 flour tortillas give 33.5 gm. protein at 70% relative to mother's milk.

HIGH PROTEIN YEAST GRAVY - good over biscuits for breakfast

1/2 c. nutritional yeast, 1/4 c. flour, 1/3 c. oil, 2-3 tbsp. soy sauce, salt and pepper to taste.

Toast the yeast and flour until you can start to smell it. Add the oil and stir it while it bubbles and turns golden brown.
Add water, still stirring, until it changes to gravy consistency. Stir in soy sauce, salt and pepper.

RICE AND DAHL

Cook 2 c. yellow split peas in 6 c. water with 2 tsp. salt, until thick and creamy. In a small frying pan, saute 2 sliced onions until clear. Turn down the heat and add 3-4 tsp. curry powder. Cook the curry powder for a couple of minutes with the onions (don't scorch it). Add the onions and curry to the split peas. Add 1/4 c. vinegar and more salt to taste. Serve over rice with soy yogurt.

SOY MILK

Alexander and the Soy Dairy have instructions for making 3 qts. soy milk. Soy milk can be used in any recipe that calls for dairy milk. To drink it cold, add sugar and vanilla. It also makes good chocolate milk.

CHEESE

Let 2 quarts soy milk stand in a warm place until the curd has separated from the yellow liquid. Line a colander or large strainer with cheesecloth and pour the curds in. Mix with 1 tbsp. salt and spice (onion or garlic powder). Drain in the colander for 1 hour, or press between two plates. It is good cold or hot: with eggplant, in macaroni and cheese, inside tortillas with chile beans, in enchiladas, or mixed with noodles and broiled in a shallow pan.

GRANOLA - a high protein cold cereal

Mix together: 3 c. rolled oats, 1 c. wheat germ, 1 c. sunflower seeds (lightly toasted), 1/2 c. sesame seeds, 1/4 c. soy flour or powder, 1 c. brown sugar mixed in 1/2 c. water, 1/4 c. oil, 1 tsp. salt, 1 tbsp. vanilla. Toast in a 350° oven (on 2 cookie sheets) for about 20 minutes or until golden. Turn often with a spatula so it browns evenly. Makes 8 c.

GLUTEN

Knead 1 c. whole wheat flour (about), or half white and half whole wheat, and water (about) together for about 20-30 minutes to develop the gluten. Soak the dough in water for about 2 hours. Pour the water out, being careful to hold the dough together, then knead it under water, kneading all the while to hold the gluten together. Change the water until it comes almost clear. There should be about 1 1/2 to 2 c. of gluten.

ROAST

To a lump of gluten add: 1/4 c. oil, 1/4 c. soy sauce, 1 tsp. salt, 1/4 tsp. garlic powder, 1 tsp. onion powder, 1/8 tsp. black pepper, 1/4 c.
walnuts, peanuts, or almonds. Chop the nuts, and if necessary to mix all the ingredients, grind the gluten in a food grinder. Blend all the ingredients and put in an oiled loaf pan. Cover with equal proportions of oil, soy sauce and water. Bake at 350° for 1-1 1/2 hours. Baste if necessary.

FRIED GLUTEN

Take a lump of wet gluten and roll it into a cylinder. Slice off rounds of it. Saute some onions in the bottom of a deep pot in oil. Put in the raw gluten rounds and cover with water. Add 1/4 c. soy sauce and 2 tsp. salt. Boil for 1 hour. They will swell up. Take the rounds out and press out excess liquid. Bread them and fry in oil or margarine. You could also grind up the rounds after they're boiled and drained, and fry the ground gluten or use it in loafs. Use the liquid in the pot to make gravy.

GLUTEN BURRITOS

Heat leftover roasted gluten in a pan with chopped tomatoes, and roll it into large flour tortillas.

SPLIT PEA SOUP

2 c. green split peas, water, 2 tsp. salt, 2 chopped onions, (2-3 stalks celery), 1/4 tsp. black pepper. Cook together 1 1/2 to 2 hours, or until creamy.

SOYBEAN STROGANOFF

Cook some soybeans and some rice. Serve on plates with the soybeans on top of the rice and a spoon of soybutter on top of the soybeans. Sprinkle on some vinegar, soy sauce and garlic powder to taste. Mix it up. This is a juicy way to eat soybeans.

1 1/4 c. cooked soybeans and 1 1/4 c. cooked brown rice give 30 gm. protein at about 80-85% relative to mother's milk.

MILLIGRAMS OF ESSENTIAL AMINO ACIDS PER 100 GRAMS FOOD

[Table: for each food, the serving size, grams of protein, and milligrams of tryptophan, threonine, isoleucine, leucine, lysine, methionine, cystine, phenylalanine, tyrosine, valine, and histidine per 100 grams, with mother's milk (1 liter, 1,000 g.) as the reference pattern. Foods listed: soybeans; full- and low-fat soy flour; soy milk; black, broad, garbanzo, kidney, lima, mung, navy, and pinto beans; cowpeas; lentils; split peas; peanuts, peanut butter, and peanut flour; baby cereals (high protein, mixed, oatmeal, rice, barley); barley; corn meal and corn tortillas; farina; gluten flour; hominy grits; macaroni; millet; oatmeal; rice; rye and rye flour; hard spring, hard winter, and soft winter wheat; wheat bran, wheat flour, wheat germ, and white flour; cottonseed meal and safflower meal; almonds, cashews, filberts, pecans, pumpkin seeds, sesame meal and seeds, sunflower meal and seeds, and walnuts; nutritional yeast; and cooked vegetables (asparagus, beet greens, broccoli, Brussels sprouts, chard, cauliflower, collards, corn, fresh cowpeas, kale, lima beans, mustard greens, okra, peas, potatoes, snap beans, soy sprouts, spinach, sweet potatoes, turnip greens).]
FOOD AND NUTRITION BOARD, NATIONAL ACADEMY OF SCIENCES - NATIONAL RESEARCH COUNCIL
RECOMMENDED DAILY DIETARY ALLOWANCES, Revised 1968
Designed for the maintenance of good nutrition of practically all healthy people in the U.S.A.

[Table: recommended daily allowances by age group (infants, children, males and females by age bracket, with additions for pregnancy and lactation), giving body weight and height, calories, protein, fat-soluble vitamins (A, D, E), water-soluble vitamins, and minerals.]

*See section on protein requirements.
*See section on calcium requirements.
Soy milk is an easily digestible form of soybean protein. It can be made into whipped cream, sour cream, ice cream, cheese and yogurt. It contains the same amount of protein as cow's milk, but less calcium and no cholesterol. We make 60 gallons a day for a total cost of 30¢ a gallon.
We fortify our milk with a standard dairy vitamin mixture containing vitamins A, B1, B2, D2, niacin, iron and iodine. We also add vitamin B12. Supplemental calcium can also be added.

Here's a recipe for making soy milk at home: Soak 4 cups of dry yellow soybeans overnight in cold water. Drain and rinse. Grind to a paste in a hand mill or blender. Add 1 gallon of water to the paste. Simmer one hour in a large double boiler, stirring frequently. Let it cool some, but keep stirring to keep the milk from "skinning." Strain through a diaper or cheesecloth to remove the pulp. Wring out the cloth with your hands until the pulp is fairly dry. Add a pinch of salt, and sugar to taste. Yield: 3 quarts.

In our soy dairy we use a slightly different process. We grow or buy bulk soybeans, which are cleaned in a small clipper-type seed cleaner. Then we grind them into grits, a little finer than cracked wheat. Next the grits are sifted to remove a certain amount of fine flour generated during grinding. We make milk in 15-gallon batches in a large propane-fired boiler equipped with a stainless steel stirrer driven by an electric motor. It takes 22 pounds of grits per batch, and each batch gets cooked at a simmering temperature for one hour. During cooking the grits soak up about an equal weight of water and double in volume. They're separated from the milk by pouring the mixture through a basket centrifuge, which works on the same principle as the spin-dry cycle in a washing machine. Then the milk is cooled and stored in a 100-gallon bulk milk tank, which has a built-in refrigeration unit. Vitamins and a little salt are added, and the milk is distributed to folks around the Farm the next day in milk cans and gallon bottles.

Our double boiler is made from a restaurant-sized coffee urn. We got it for $15 at an army auction. The basket centrifuge was made from an old front-loading washing machine, also obtained at an auction ($5).
We removed the basin, spinning basket, outlet hoses, motor and drive belt, and built a stand out of oak 2x4 to hold it all as shown in the picture. The inside of the spinning basket is divided into three sections. We cut a double layer of standard aluminum window screen to fit each section. These screens need to be replaced about once a week.

An alternate method, which we haven't tried but which would surely work, would be to use a press, such as an old wine or cider press, to squeeze the milk out of the pulp.

The bulk milk tank we got used for $150. The Tennessee Farmer's Market Bulletin frequently lists bulk milk tanks for sale. Most states probably have such a publication, and it would be a good place to look for equipment. Farm auctions are another good place to look.

Please write to the soy dairy if you have any questions about soy milk, or stop by for a visit and tour, and we'll be happy to give you a glass of milk to taste.

Love,
Alexander & the Soy Dairy

Flour, corn meal, rye flour, soy flour, homemade breakfast cereals, and homemade peanut butter are all easy to produce with even a small-scale milling operation. Old grist mills are not hard to find, and smaller models can be purchased at not too great an expense. You can run the big mills with a flat belt off your tractor if you're only going to be grinding part time. If you need to feed a community of more than a hundred people, someone should become the miller for that village.

We have a new rodent-proof mill under construction, with a concrete floor and concrete block in the walls up to four feet high.
The mill is designed for at least one large motor to power a long drive shaft, with flat belt and v-belt pulleys affixed at various intervals to power the grinders, sifters, and cleaners, which sit along both sides of the shaft.

Before grinding soy flour you may want to construct a simple dehydrator to dry out your soybeans. For a clogged mill, there's no remedy but to pull it apart, although sometimes running through some dry corn is good for scouring it out.

Seed wheat and rye, seed soybeans, and dry field corn can be purchased from a seed and feed supplier or a farmers co-op until you're growing all you need yourself. These are clean high-quality grains that generally have not undergone any kind of chemical treatment.

We make breakfast cereals from various blends of cracked wheat, cracked rye, corn meal, and soy flour. Remember that anything containing raw soy flour needs to cook at least forty-five minutes, and corn meal mush just starts to get tasty at half an hour. Rye takes longer to cook than wheat, so crack it smaller.

You can grind and sift brown rice into rice cream for baby cereal. We make peanut butter by running roasted peanuts through an old supermarket coffee grinder. It comes out slightly chunky.

Love,
Patrick the Miller

We use canning, freezing, and dry storage to preserve our foodage. We can all fruits (mainly apples and peaches); tomatoes (stewed, sauces, and paste), pickles, relishes, hot sauce, and sauerkraut. These are all high-acid and will preserve easily without pressure canning. We freeze most of our vegetables: corn, green beans, peas, eggplant, okra, beets, and spinach. We put in dry storage winter squash, sweet potatoes, white potatoes, turnips, cabbage, kohlrabi, onions and apples.

Our neighbor ladies have been canning for many years and are always willing to give us helpful information when asked.
We've learned a lot from them, as well as from the big jar companies and the Department of Agriculture.

It's important to keep everything clean when you're canning. We heat and sterilize the jars and lids and pack hot food into hot jars, cleaning the rims and sealing them. Then we process the jars in a water-bath for the recommended amount of time.

We also learned that it was important to blanch vegetables before freezing. This means to cook for a few minutes with steam or boiling water and quickly cool with cold water. The first year all the vegetables we froze raw came out rubbery and tasteless. Blanching stops the enzyme process which helps the plant while growing but deteriorates it after it's picked. It makes a difference in the quality of frozen vegetables if they're young and tender and if they're processed soon after being picked. Fruit is a different trip and freezes fine raw. We pack blackberries and strawberries in sugar: 4 cups berries to 1 cup sugar. We found they're juicier and tastier with sugar. Applesauce freezes fine too.

We make sure we're loving each other and being good to each other while we're working. The vibes you put into foodage while you're preparing it affect the energy in it when you're eating it. The more stoned we are when we put up the foodage the better it tastes and the more we can get done.

- Mary Louise and Jeanne, for the Canning & Freezing Crew

If you honestly care about somebody being hungry besides yourself, you can t . . . your body.

[Q: I'd like to know how to stay high.]

Have real good karma. Have been generous with your energy for a long enough time, and it comes back like bread on the water. Establish good credit with a whole lot of folks. Be really honest in all your energy relationships. Tell people what you really see.

Sit still and meditate and try to remember what it is you're doing: "Am I here just to tickle myself? No, I gave that up, I ain't here just to tickle myself anymore. I'm here to try to figure out where it's at now.
I'm here to try to help out." After a little of that, you'll realize that instead of sitting around like that you should be up and doing something and taking care of business. There's an old Zen saying that says if you get up in the morning and you don't know what to do, cook breakfast, eat breakfast, wash the dishes, clean the house . . . For openers, you know. And then the rest of the world. As you get one piece squared away, you can take the next size bigger thing and work on that, and just keep going until you bog down or make it.

I never believed that you were supposed to shut up, mind your own business, and get high. I always thought, "What if this dude over here ain't getting off? What if this dude over here's bumming? Ain't you tripping with him? Don't you have to do something about that?" And so lots and lots of times when I was tripping I folded up my trip and said, "Well, I'll trip next time," and went over to try to get it together with somebody, and try to get them off, and found I got paid off for that with interest. Every time I did it I got more juice, and it made me stronger.

The Creation happens all the time in the here and now as the sum total of the thoughts and desires and hopes and aspirations of all sentient beings. Not only mankind but other life forms. Each one of us is creating what's going on around us, and whatever is going on around us is the totality of what we're creating. And some of the creations that some of us do are so aberrant and far out and remote from any possibility of happening that nobody ever sees them; they don't materialize. But they're still there, and they're adding their influence to the overall whole. If you've got a hundred people looking at an orange and ninety-nine of them say it's an orange and one of them says it's a rutabaga, well, it's going to keep on being an orange, but it ain't going to be quite as perfect an orange as it would have been if all hundred folks said it was an orange.
Everybody dig that?

Now I'll tell you what it feels like to me; I want to see if it feels familiar to anyone else. It feels like to me that if I've got agreement, I can do anything. It feels like if I am undivided in myself and am at harmony and at one with myself, that I can see the results in the whole world. Including next week's Time magazine. There's an old Japanese aphorism that says, "While drinking my cup of tea I stopped the war." And it's thought to be one of those weird Zen sayings that you can't figure out what it means.

Now I feel that most folks here want to be in agreement because they know how strong it is. Some folks may not understand it, may not think it's important. The way we want to make our basic heaviest agreements is not stuff we say. Like the quality of grass varies according to the agreement, and if everybody who's smoking some grass says that it's getting them high, it's getting them high. But if a couple of people are being ordinary, like not being high or not taking the trouble to snap up and see that there is any high and pay attention to it, they can bring it down, because they won't put their agreement into being high.

Okay, if that's where it's at about oranges and rutabagas, where's it at about tractors and trucks and farms and states and countries and things? Like there's this country here, and what it is is whatever most folks say it is. And if most folks say it's a fair shake, then it is. And if a lot of folks spell it with a "k" they can get it to be that way. Everybody hip to that?

Well, keeping this whole thing high is not a question of grass. This is too big a critter to run on fuel. When we have it it's nice. Sometimes we don't have it, but we can agree to be stoned and be that way. We got a little bit of it right then just as a few people understood that.
Now if the agreements aren't made verbally, like in the example of the grass I was just mentioning, they're made by how you act and how you be, and if you act and be like you're having a good time, then you put your agreement in with all those who want to do that. And if you act like it's hard or if you act like it's a bummer, or that you have to work hard or it's cold . . . You can have a room full of ten people and a crying baby, and if all ten people agree that that baby crying isn't heavy, then it ain't heavy. But if just one of those people thinks that baby crying is heavy, it's heavy for all ten of them. You can attach importance or value to anything you want to or put energy into anything you want to.

Somebody told me about an old community up in the next state or so, that these people came and they had heavy agreements, and they built this beautiful community, and now that community is no longer in existence. However, the buildings they built are still there, and the descendants of the settlers of that community are still living there, but the community is no longer in existence. All they lost was their agreement. They said they keep the buildings up as kind of a shrine.

Everything that you do matters. John Donne says, "Everything is at stake all the time."

All this stuff about agreement is that I want to see what's the agreement about how high we can be. I watch sometimes and see the ways we collect subconscious, and subconscious keeps us from getting high. People realize that there's a commitment to do a spiritual thing here, but they might not know what one is, so everybody is trying to do a spiritual thing the best way they know how to do that. And the thing is, we have these old books of instructions that have been passed down to us for thousands and thousands of years, and we're out here creating the Aquarian Age just like a husband alone with his ultimately delivering wife and the Midwives' Handbook. We never did this before.
We tried this before; us kind of monkeys tried this before, lots of times, lots of different ways. People drop by and they say, "Far out farm you got here." And I talk a lot about how if you're driving from here to Nashville, it's however far it is from here to Nashville, but if you're driving to Canada, from here to Nashville ain't very far. They say, "You guys have a thousand acres and five hundred people really integrated here." But a thousand acres and five hundred people ain't very many; we want to integrate everybody in the Universe.

We're going to be remembered for so long that it better be really clean, and it better not have any confusion in it, because we're sending a telegram down through time to ourselves, and we don't even know if we're going to be able to read when we get there. So it better be plain. Getting stoned and reading scriptures is like talking to yourself on a long-distance telephone, and you're saying, "What's really important now?" How to get cool is some of it. About a third of the religious writings of the world could be condensed to how to get cool. There's a place where there's being cool, but you can't devalue the phenomena that's going on. You've got to recognize it for its full weight.

There's a man that I really love that I feel like is one of the heaviest teachers that I ever met, because in the middle of telepathic phenomena happening just lavishly all over the place (cheap, free, easy, "Here, take two of these") this dude said, "It doesn't matter if it's happening ten thousand times a day to millions of people everywhere, it's a Holy miracle each and every single time." He said, "That I can know what's in your mind, and you can know what's in my mind is a miracle that the materialistic scientists boggle at. Even if it's happening to you and you and him and him, even if it's all over, it's still a miracle, each and every time." And he refused to let value go out of highness, even though we were just stuffed with it. Which was what we were in San Francisco.
We just had more energy, amazing amounts of energy. And some folks quit respecting energy, and we can see their ships going off and hitting rocks occasionally. The folks that forgot to respect energy and forgot it was life force, suddenly you don't hear about them no more. They're not making noise in the astral thing.

Anybody who has been with us for a while has been seeing what we've been doing. We integrated San Francisco, and when we went to caravan around the country we integrated the country, because it wasn't one thing before we did that. People were shooting their kids, and it was a pretty heavy trip, and it cooled out a lot. It was like oil on the water. Oil soothes the troubled water to the degree where a few gallons of oil can smooth out acres and acres of water. And that's how it is with good vibes. I don't know what measurement good vibes come in, but a small amount goes a long way.

- Farm Meeting, 7 February 1972

We believe in meditation. Every Sunday morning we sit and meditate for about an hour. When it's warm enough we meet outside, and we sit formal zazen and meditate until the sun comes up, and then we chant the OM as it crests over the hill, because that gets us all together into one thing. And then I perform any weddings that we have, because that's a real good time, when everybody's assembled and it's with the stoned witness of the whole Church.

Meditation is learning to be quiet and shut your head off long enough to hear what else is going on. And when you get quiet enough for long enough, you get so smart that you suddenly realize that you've never been that smart before in your life, and it's a much better mind that you have access to than the one that you usually do.
If you ever get in contact with the overmind, you know it, because it's smarter than you've ever been.

"Perceiving in silence" is like recognizing that on the sound plane there is achieving non-action through action just as there is on the karmic plane, or to achieve thoughtlessness through thought as you can on the mental plane. It's to recognize that there is emptiness in form, form is empty too, and that doing something is just like doing nothing. If you're not uptight about it. If you just dig everything that's there, and just see where it comes from and see that it's all nothing anyway (or something) it's absolutely meaningful, which makes it the same as nothing. The communication curve with the Universe becomes asymptotic, which is where a curve comes up and goes up and up and up, and as soon as it's going straight up that's all you can do on that graph. If it continues to accelerate from there, it starts going backward and becomes meaningless on that graph. So as you go into communication with God and the Universe, it gets higher and higher and higher, and pretty soon the communication reverses so excruciatingly that you really become the Universe, and then there is nothing.

This week I've been thinking that I forget sometimes that there's a spiritual revolution going on. And I understand that there's something like five thousand communities now in the country. When people go to these communities they check them out for clean and sane and that kind of thing. But what's really interesting about them and is the real big common factor is not what they say their religion is or that sort of thing, it's that for some reason or another they felt that the main stream of the culture was so far removed from what was real and unconceptual that they cut loose of it and went out to live in some place with muddy roads.

I got reminded this week that I was a spiritual revolutionary from reading a book about this yogi, and he's one of those cats in India of which there's quite a lot.
And he said, "India is our playground. It's the playground of the masters, because we're the custodians of the divine plan, and no one will ever take it from us." And, you know, I ain't political, but I sort of felt the ghost of Che Guevara for a second. One reason I use psychedelics is because I find open religious experience to be one step closer to the thing than open Bible, which was a step closer to the thing than having a Bible of Latin that only the priesthood understood. Which was back up a chain like that: you can go to the experience and learn it for yourself.

Now there's another thing in there about the psychedelic thing, which is that somebody who doesn't know what high is can't tell if you're high, can't see any difference in you, might notice if your eyes get red. But there's an order of real experience that's as real and common and everyday to us as whether the sun shines or whether it rains or whether there's enough to eat, that's as easily and plainly discernible as whether the lights are on or not, and that the majority of the culture doesn't believe in, has heard rumors out at the edges somewhere that there was something other than the meat part. And it reminds me and makes me remember that as familiar as that is to us, we don't dare let it get ordinary.

In San Francisco, when heavy psychedelics was at its peak, people were seeing stuff three or four times a week that one shot of it should have went wham and just straightened them. They should have just got cool right now, you know, they should have said, "Wow, man." And they were so jaded from doing it a hundred times or two hundred times or three hundred times that it didn't have any juice. That's one of those places where "if the salt of the earth loses its savor, wherewith shall you savor it." If the real religious experience gets devalued . . . you know. On a superficial level religion was like a fad in this country.
It's also very possible in this culture (in fact very probable on a statistical basis right now, though I think the odds are swinging better) to be able to be born and be put uptight so quickly as to never have a conscious memory of not being uptight and be kept that way for the rest of your life without ever slowing down, until possibly you hit senility and you ain't good enough to work your computer no more. And then you may slow down, if you don't get plugged onto television instead. If you were sloppy about your thing, you could even do it on this farm. You could just keep yourself on a trip, you could just keep yourself not slowing down and really not taking a look at what's going on, and keeping yourself really involved in yourself, and forget what we're doing here and that we've all put everything we have into it.

When I was a dope yogi in San Francisco, every day was Sunday, and I didn't understand what Sunday was about, not having to hustle five or six days a week. Now I know what Sunday's about. Sometimes I think that if possible folks ought to, maybe at some time during the week other than just Sunday, meditate a little bit, because it's like a skill, and if you don't keep up with how to do it, it's like skiing or something. And the thing about meditation is you can go to a place where you know where it's at. I went to a place this week in meditation where a whole bunch of stuff that I was trying to resolve, I either resolved or I became unattached about it, and one way or another I came out the other end of it at peace.

America has this Judeo-Christian tradition they talk about, which works like this: Christians say, "It happened two thousand years ago and you missed it." And the Jews say, "It ain't happened yet." Well, I hear cats talk about, "Jesus is going to come in glory!" And they figure that glory means motorcycle escorts, picture on the cover of the Rolling Stone and stuff like that, man. That ain't what a glory is.
See, this country's religion is in such sad shape it don't even know its own religious words. A glory is an aura; a glory is your field around you. And it says, "He's going to come in glory." I think he's already present, right here, in glory. And he's here for anybody who can tune into it, and anybody who will raise his mind out of the drag of self-interest and raise it up to realizing that we're all one can be in contact with that. And it's on earth now, and it's making a tremendous difference on the planet.

Anyhow it really feels real and immediate, and I love you a lot. Good morning.

- Sunday Morning Service, 4 February 1973

Somehow we've got to be compassionate and keep our sense of humor and don't get grim and continue to process all the karma. When you pick up a lot of karma it pushes you back along the line of development of your own ego. If you pick up a trip off somebody else, the way it manifests in you is your same old trip again. And if you notice that you keep going through the same old trip again and again, then you ain't bailing yourself out; you ain't availing yourself of the yogas taught on the Farm about how to unload that stuff. But you could start making some actual cumulative progress. You could come back up faster when you get pushed back if you're aware and know what you're doing, but if you're not aware and don't know what you're doing it takes you as long as it did the last time maybe. And you can just not make progress. If you know folks that are just not making progress it's because they're taking their own trip seriously, believing in that stuff and thinking they aren't able to back out of it. The way we be in our family is it's like a football field, and everybody knows what the fifty-yard line looks like, and they all been back and forth across their personalities so many times that they recognize all those trips.
And when they come across one they say, "Oh, I remember this one. I went through this one before."

- Sunday Morning Service, 25 March 1973

There's a story that says the first man to realize the truth and infinity of God was taken up into that idea, just for thinking it up. And it says he had no human master, but that the second man to realize God consciousness had in his universe the fact of the first one, and that if he didn't adjust to that, he was maladjusted. That's the basis of hierarchy.

Then there was a Zen master who said that Buddha was the first man to realize and keep the religion of enlightenment in the history of man. Which is the same thing. However, Gautama Buddha said that there were prehistoric Buddhas, and that there was an unbroken string of Buddhas forever. And in the same way that Christians have forgotten that every birth is the birth of the Christ child, it's been forgotten that every God-realization is the first one, and that for everybody who realizes it, it's the same one. And it says, At last you've come home, my son. Or daughter.

- Sunday Morning Service, 29 July 1973

Last week I talked about the idea of an exclusive apostolic succession, and I have something further to say on that subject: If a monkey is climbing down from a tree and lets himself down from a limb until he touches the ground, or if a monkey walks up to a tree and reaches up and grabs a hold of a limb, it's the same thing and it don't matter from which direction he came. Right?

One of the more popular religions in this country is not going to church while feeling a little bit morally superior to those folks who do.
And the preachers who are responsible for that are the ones that preach formulas out of a book or talk about things they don't understand or haven't experienced and in no way actually speak to the experiences of the people.

Sometimes it seems to me like I know everybody who is here really good, and everywhere I walk when I see people I know them. And sometimes I walk around and it doesn't seem like I know anybody. And I used to think that had to do with how many visitors there were here or something, but I've since found that it doesn't have anything to do with that, it just has to do with how well are we copping and where are we at and are we stoned and are we compassionate.

The idea of exclusive apostolic successions is partly based on a false idea of reincarnation which implies that the ego is reincarnated. One might say that ego is reincarnated, but not yours. And we all have to work that ego out, but it ain't ours personal, it's much more. And that false idea of ego opens the door to such rank heresies as you can't get it on in one lifetime. It ain't a question of one lifetime, it's that you can get it on now. Anybody can. If you haven't been paying attention, you might already be on and not know it. It might even be better that way.

If you could just don't keep that in mind and have a good time today, you wouldn't break the Sabbath. Good morning. God bless you.

- Sunday Morning Service, 5 August 1973

Some people think that when a monkey reaches up and grabs ahold of a tree limb what he's supposed to do is pull himself up into the tree immediately and say, "Nasty old ground," and never come down again. Well, I heard a road chief say one time that there's going to be a statistically equal amount of karma come down, and he thought what you were supposed to do was pray real hard and maybe it wouldn't come down on you.
And then there was a lady came up to a streetcar track in San Francisco and turned to the young man next to her at the curb and said, "If I step on that track, will it electrocute me?" And he said, "Not unless you put your other foot on the overhead wire."

I think when a monkey reaches up and grabs a limb that he's supposed to keep his feet on the ground and hold on to the limb and be faithful to both planes: don't lose his scientific method, do his material plane right, and stay faithful to the spiritual plane and don't violate any of its laws. What we know mainly about that plane in a historical sense is the record of all the kinds of things that have happened in the connections between man and God and Heaven and earth, but as far as this particular karma coming to us right at this instant goes, it's fresh and new, and it hasn't been pinned down yet, and we're creating it as we go along. And so instead of the idea of trying to dodge any karma, they say the Zen master uses no magic to extend his life. That is to say he accepts the karma he has coming fair and square. And we do that. We accept our fair share of the karma as it comes, and we don't try to shuffle it around and make it come down heavier on somebody else or lighter on us. And how we do that is this generation's connection between Heaven and earth and man and God.

There's a way that you can be a materialist about spiritual things, and that is if you treat your stoned like it's a material, finite quantity of stoned, and that if you lose it you've somehow been cheated or had. But this is the real secret of the system: There may be a finite amount of gold and there may be a finite amount of iron, but there is an infinite amount of Spirit, and if you ever lose it, relax and get honest and remember why you're here on the planet at all, and get it together, and it'll come back, every time.

Read some Holy Books today, talk about something stoned, get stoned, and remind yourself where it's at.
I don't think in terms of losing my energy, I think in terms of buying karma, and I feel like I'm a big spender from the East for buying karma. I don't care how much I buy, I got this large holding company that helps me carry it. And you can do that too. If you're on the gate and you get somebody uptight at the gate, just buy it. If somebody burns you in town, just buy it. If you make a dumb traffic mistake and somebody rips you off, buy it, you know, and just keep on buying it. You can't sink yourself.

Good morning. God bless you.

- Sunday Morning Service, 12 August 1973
If you're going to have anything to do with a material sacrament, which is what a psychedelic is, it should be in such a way that there is nobody interspersed between you and where you're going. That is, don't take anything made in a laboratory. If you're going to take anything, there's grass and mushrooms and peyote, which are the classic organic psychedelics. We believe that if a vegetable and an animal want to get together and can be heavier together than either one of them alone, it shouldn't be anybody else's business.

We believe in psychedelics and that they expand your mind, but all the rest of the stuff that beatniks take is mostly a social fad. Don't lose your head to a fad. The idea is that you want to get open so you can experience other folks, not close up and go on your own trip. So you shouldn't take speed or smack or coke. You shouldn't take barbiturates or tranquilizers. All that kind of dope really dumbs you out. Don't take anything that makes you dumb. It's hard enough to get smart.

There used to be a respect for consciousness, but quite a few folks these days are willing to put themselves in low levels of consciousness for temporary short-term feel-goods, such as quaalude, sopors, methaqualone, and that kind of dope. For those of you who are maybe into taking downers or wonder where downers are at: If you get high on a psychedelic, the worst thing that can happen to you is that you can drift back down to where you started. But if you get really dumb on downers, you ain't going to drift back up; you ain't going to drift back smart again, you're going to have to hustle back smart again, and if you ain't been hustling, you may not be as smart as you used to be. There's no excuse for taking any dope but to get high. Dope that gets you down, don't bother taking.
We play rock and roll because it's a medium of communication, and also because it's the church music of us kind of folks. Rock and roll is one of the mystic arts of communion. It's supposed to fix your head up so you're telepathic. It's one of the psychedelic arts. It's supposed to be, ain't it? A rock and roll band can be like a transformer that can take 110 out of the wall and transform it through a guitar player or something, and it becomes palatable for human beings, and it's really a kind of energy food. At one time rock and roll had a tremendous amount of juice, and it was because there was communion happening in it. It was because people were getting really stoned and they were seeing that they were all one, for real. That was what made rock and roll heavy from in front.
I come out with a rock and roll band to help make a communion happen. That's what we come together for. People say, "What's Stephen's religion, man, what's he doing out there?" Well, it's built on the idea that there is a communion that you can experience that's the real thing. And it ain't built on ideas, it's built on experience.

We tried to put our music together so that no matter where you were at it wouldn't bum you and it would put you together and it would tell you the truth, and it's saying, "You're gonna do all right," and stuff like that. And we thought if we did stuff like that real loud it would be good clean mantras and good stuff to put in people's consciousness. All the stuff we're singing is stuff that we want to say to you, and we couldn't say it at all if it wasn't true. It's real stuff and it means real things.
You see, we don't just come out to rock out for you; we come to change your life. You could get stoned. You could make it work right. You could make it so it was a groove. You could make it so you know why you're here. You could make it so you're enough of an adult that you could get married and mean it.

See, this is a trap. We trap you with our rock and roll. When you get right down to it, we're a Salvation Army Band, and what we're out for is your gourd.

You know how the military sends their recruiting officers around to the campuses now and then and they try to get all the people that want to be Marines and stuff like that? Well, I'm a recruiting officer for reality, and I go around to campuses, and I'm trying to recruit people to join into reality. I talk to anybody that wants to talk. We talk to rock and rollers on college campuses and in parks, in Christian churches and in beatnik rock halls, and we talk to people in Greyhound garages when we get our bus fixed and in truck stops when we stop there, and people in little grocery stores, and every time we stop at a place like that all our people are out with Farm Reports and things, and they're telling the folks what's happening, and they're trying to get them so they aren't scared of longhairs, and they're trying to repair some of the damage of how those folks have been scared. We try to cool out everybody so nobody's mad, you know, and leave the thing stoned. That's worthwhile.

When I was teaching in San Francisco I sat lotus position over on the side of the room, I always sat down, and I never used a microphone. Had fifteen hundred, two thousand people without using one, and I didn't have a rock and roll band and I wouldn't wear white and I quit wearing glasses in an effort to get pure. And then when I figured out where it was at some more, I got my glasses back, got a microphone, got a rock and roll band, and came out with a scenicruiser and all this gear and equipment to attract your attention.
There's other stuff I could talk about, but I have to say this stuff before I can do that, because if I don't it will just be back there in the back of my mind and I'll be thinking it all the time anyway, so I might as well try to get it clear.

That time I came out on the Caravan I talked a lot about violence. Right now we ain't too much trembling on the brink of violence anymore, but we're trembling on the brink of stupidity. It took four or five hundred years for the Roman Empire to fall, but we have better communications these days. It can happen a lot quicker. And this is a decadent empire. The thing you do about a decadent empire is you don't try to tear it down, you'll get caught underneath it. Just stand back and learn how to take care of yourself. Learn how to take care of some other people. Don't take over the government, take over the government's function.

I think it's time we got it together. I think that a lot of beatniks are self-indulgent and lazy. When you go into town and you want to find out what kind of a beatnik scene they have, sometimes you find the height of their technology is roach clips and candles that melt all over your bedspread. But we have to do something better than that. It really is a heavy thing to see so many people with nothing to do - to see how much work there is that needs to be done and so many people with nothing to do. Beatniks lying around on welfare and food stamps, man. It's a boggler.

Don't take no welfare. Take care of yourself. Nobody's got free will that's on welfare. I don't care if you have telescopic vision and you can see through office buildings, if you're on welfare you ain't enlightened, you ain't cool, you ain't taking care of yourself. I think that if you're going to wear hair, you're going to be claiming that you know something about how it ought to work. If you know how it ought to work, you ought to be able to make it. And it ain't hard to make it. You can do it.

A college education is no help on how to make it.
I think a college education is almost as much of a bar to enlightenment as a naturally mean disposition. What you're supposed to do about college in most cases is get out of it immediately. The only difference between college and welfare is social position. It's cold storage to keep you off the labor market to don't embarrass the government. It's artificial adolescence. I feel that colleges right now in this country are the most gigantic, expensive, well-decorated playpens on the planet. A few folks might be learning something that's good for mankind, but most college students are wasting their time. And the level of intelligence that it actually takes to do college work is so low and is so far below a real human intelligence that it actually makes you dumber to run your head across it, and the longer you stay there the dumber you get. Deep in your heart of hearts, you know that it's a lot harder and takes a lot more patience and a lot more character to tune a car than it does to write an English paper. Unless you're doing something real in school, the only reason you're there is for social position - either for the social position of being there or to learn how to make more money to have a better job to be in better social position. And social position is caste system - nothing else but.

The thing is, we were raised by a materialistic culture that doesn't believe in God and that taught us that social position and material gain is where it's at. It thinks that because it's jaded. That's what's really wrong with it is it's jaded. It's seen World War II and Korea and Telstar and John Kennedy's assassination, and it's seen the atomic bomb, and it's seen the possible end of the world, and it's seen giant shooting-down confrontations between Kennedy and Khrushchev - watched them walk down the dusty main street with their six-guns. It's seen acid dropped all over the place. It's seen whole cities tripping at a time. It's seen revolution brewing right under its capital.
It's seen a bunch of stuff and it ain't impressed. It's bored and it's jaded. The thing about being jaded is that the more it takes to get you off, the more it takes to get you off, the more it takes to get you off, the more it takes to get you off. . .

But you're supposed to be able to really dig seeing some corn sprout. You're supposed to be able to dig having a good enough appetite to eat your meal. Simple things.

When they talk about the Aquarian Age, they don't mean there's a guarantee ticket that we're all going to get groovy, but that there's enough energy there that if we all try, we can do it. The world's at a place it's never been at before. There's enough communication that we could do it on purpose instead of random. I used to think when I first started getting stoned that if the world was really stoned it would be easy. And then I found out the world is really stoned, and you have to learn how to trip. So I'm trying to teach how to trip. And religion is discipline about tripping. We need to be not just a little high or feel good, as a religious experience, but we need to get very smart and very just and very kind and very clear. And it requires discipline while tripping to be that way.

There's a lot of people around been telling everybody a lot of things about the shape of the universe and where it's at all and all that kind of thing, and you hear a lot of people saying that you can't make it - you just can't make it now. Well, you can. We've been doing it. We're making it, and you can make it. Here's what I think is the biggest turn-on that there can be: There is something you can do, and it's within your power to do it, and it makes a difference. Wow, man, what else could you ask for?

Get a two-way radio, man, and you can talk to us. You can call us and we'll talk to you and tell you how to handle your sheriff. All these nice toys we can play with. This is the twentieth century.
HAM RADIO

We use amateur two-way radio between our band bus and the Farm to stay in communication when we're touring, and we talk with the Farm every hour while we're on the road. Also our Farm station will soon be on the air all the time. You can set up an amateur station that is good enough to talk to us for around $200. For the most part we use the following frequencies (or as close to these as possible):

7,245 kHz on 40 m.
14,345 kHz on 20 m.
3,945 kHz on 75 m.
21,445 kHz on 15 m.
Sunday Morning Service
19 August 1973

It feels almost too stoned to talk, but I keep thinking that the Japanese say it in a really simple way that's so simple that it's amazing we don't understand it better. They say, Zazen is Buddha, Buddha is everything, this is enlightenment. Isn't that simple?

Yesterday in the recording studio we played a tape of one of the songs backwards, and it was such a groove that we listened to the whole song and it really turned us on and the notes were very pure. It was a slow song, Easy Does It, and the organ notes would start and then stop strangely, and the cymbals instead of going ching were going shlup. We talked about it, and we said that from listening to it backwards you could tell that it was religious, and that it was universal music, and we thought it was so far out we might even include it on an album some day. And then it occurred to me that it sounds just as neat frontwards, only we're jaded on language and form. Now understanding that, consider that illusion and reality are one, and that you're just jaded on form and structure. That's what Jesus meant when he said be like a little baby and see like a little baby and let it all be new to you, because it's still a gas.

In Sanskrit they say sangsara, meaning illusion, and nirvana, meaning ultimate attainment, which some people think means all your plugs pulled, or a 98.6-degree bath with a 40-decibel OM. But sangsara and nirvana are one: The odds against picking up a deck of cards and dealing them off ace, two, three through all the suits are just astronomical. However, the order they were in when you picked them up had the same amount of odds in it. And if all of these atoms were all lined up in a perfectly homogeneous soup, there wouldn't be any discernible form, and you would call it void. However, the arrangement the atoms are in already is as hard to get to as that other one.
And all of that is what you call the basis for the sudden school. It was on the basis of that information that Buddha said, "Avoid error." Buddha also said that the antidote for fear is courage, or in more familiar Farm terms, the antidote for not getting it on is to get it on. Don't it come down to that?

When you get initiated into the Masons they tell you the magic words of all power which are known to all the heavy magicians, and without even having to initiate you or anything I'll tell you what they are right now. And those magic words are so heavy that you have to be very careful what you put next to them. It's like the cat who met a genie, and the genie said, "I can do anything you want," and the cat said, "Make me a malted," and the genie said, "Zap, you're a malted." It's like that. And the magic words of all power of the Masons are: I AM. And you should be really careful what you put next.

It looks like we pretty much know where it's at. I think it's even against my religion to claim we don't. Good morning. God bless you all.
OLD BEATNIK SORGHUM
Made On The Farm - Summertown, Tenn. 38483

When we came to Tennessee, we were still using honey so we thought we'd be beekeepers, until we found out it was just too heavy on the bees. Then we heard about sorghum. It's a light sweet syrup, and has been a Southern tradition for generations. They used to use mules to turn the mill and crush the cane, and cook the syrup in pans over a wood fire. But it takes a lot of field hands to strip the leaves off the cane and harvest it, and the cost of hired labor has grown so high that hardly anybody makes it anymore.

When we heard about sorghum it sounded like just the thing for a good all-purpose sweetener. We decided to plant some sorghum cane and build a mill for crushing the cane and cooking the juice into syrup. We bought the equipment and built a crusher shed and a mill. The layout and process we used had been described in detail in a Government Printing Office pamphlet in 1938, and nobody had yet built and operated a sorghum mill exactly according to those plans. They called for a split-level three-tiered mill that allowed the juice to flow by gravity as it was cooked in two propane-fired pans.

Every year we've cooked sorghum grown mostly on shares with our neighbors. They usually grow the cane and we harvest it and cook it down. Then they take their share of the syrup or we buy it from them. The first year we cooked five hundred gallons of syrup. Harvesting took an all-Farm effort with everyone going out to the fields to strip off the leaves and cut the cane. It got us high to all work together at the same project for a couple of weeks.
If you're living on a piece of land and don't know about the water, go to your health department and ask them. They've been in the area for a long time and know the land and can give you some good information on how to keep your water pure and safe.

We found that drinking out of an open stream all the time can give you dysentery, even if it's clear, because you can't tell what it has in it upstream from you. So we went looking for springs to supply our water needs. If you have a creek on your land, you can follow it upstream to its source, and where it starts to come out of the ground is your spring and is good water, providing there's nothing around that could contaminate it like a barn or an outhouse or a drain of any kind.

The county usually has pamphlets on how to build a springhouse. We found two books that have helped us get it together: The Village Technology Handbook, and Water Supply for Rural Areas and Small Communities, a World Health Organization book. If you don't have any springs on the land, a well would be your next best source of water. There are some types of wells you can do by hand if your water table is not too deep. One way is the well point method: You can buy a pointed tip that you pound into the ground and add pipe as you go. We know little about this and have too much rock to try and do it this way, so we bought a used drilling machine and are in the process of digging wells with that.

We found that water is needed to sustain life and the more that's at hand makes for a healthier life.

We used to spend eight hundred dollars a month on laundry. It was so outrageous that we decided it was easier to buy a laundromat. So we built a building, bought washers and dryers and set up a laundromat right on the Farm. The nice thing about it is that instead of having a dime slot it's got a light switch, and you go "click" instead of putting in a dime to turn it on.
Here on the Farm we're a crew of five men that spend our time doing new water lines so we can all have running water soon. We're using plastic pipe for all our water pipes. It works good if it gets buried; then it's good for a long time. The only place we found it doesn't work well is above ground and inside a building, because sunlight and handling it are hard on it. So we use steel pipe there and it feels much stronger.

We've seen a lot of beatnik communities that had low health standards. That's why the bathhouse was one of the first buildings we built - so we could stay clean while we did our thing.

- Paul & the Water Crew

The motor pool is the center of our technology. We repair stuff. Some folks think they just can't keep a machine together and never touch one, but we've learned that "being a mechanic" is a shuck, because working on machines and vehicles just takes paying good attention and keeping high standards.

Staying in good communication is how we keep our group head together. The action can change fast from one day to the next, so we usually have a meeting before we start work in the morning to sort out what, when, and how we're going to do it. As the group head's gotten smarter, we've manifested for ourselves a large motor pool building with a lift, a pit, a parts department, a welding shed, and a dispatcher's office.

We build and rebuild a lot of our equipment. Our compressed-air unit, for example, was made from a 1-hp refrigeration motor and an old air tank and compressor out of a schoolbus. A couple of our flatbeds were torched, welded and reconstructed from schoolbuses. We bought another flatbed for a dollar - and it's still running. There's useful stuff lying all over, in back yards and junkyards, and mostly it's just old, needing some oil and attention.
When we get a good agreement on what we need and get out looking for it and talking to folks, somebody usually knows where one is.

We have ladies to dispatch the vehicles. Folks who need to get somewhere talk to them. They coordinate the transportation off the Farm with what vehicles we have on hand. We have a few late-model passenger cars for town runs, doctor runs and the like; pickup trucks and flatbeds for heavy-duty work; an ambulance, and a pickup truck for our midwives. Pickup trucks are rugged and will last - they'll get you into and out of the mud and the brush.

We keep things going by keeping good maintenance records on each vehicle and regularly giving it a lot of juice. Motor pooling is a far out meditation: You have to be yin to see what to do, yang to get that done, and unattached to the results.

- Peter and Rupert, for the Motor Pool

We believe that people's emotions are really common to all people. You don't just have an emotion inside your head and you're the only one in there with it - you put out a vibration of that emotion, be it fear or anger or love, that other people share with you. But if you turn a bunch of anger and fear loose in the world, it just bangs around in the world until somebody meets it head on and chooses not to put it back out into the system. It's like there's a ball banging around, and you can catch it and throw it back at the next cat, or you can reach out and catch it and you can take it out.

The thing about anger is to remember that it's not necessary and that it's optional. There's a lot of psychologists these days that say, "Oh, anger is part of the thing, you have to let your anger out or it'll choke you up," or something. But it ain't like that. If you let your anger out it gets you in the habit of letting anger out - it makes you indulgent about letting anger out. And you don't have to do it, and it don't hurt you to not do it. It's good for you to not do it - it builds character, it makes you have a stronger thing.
Don't think you have an ungovernable temper or something. If you've blown it at somebody, then you remember, "Oh, I wasn't going to do that no more." And then maybe you're blowing it at somebody and say, "Oh, I wasn't going to do that no more, and here I am doing it." But there'll come a time when you'll remember you weren't going to do that before you start. And you remember before the adrenalin rush comes. And if you can remember before the adrenalin rush comes, then you can just back off and don't do it.

Remember that emotions are illusions. When somebody comes on to scare me they can make my stomach turn watery and make my knees turn weak and make all that stuff happen to me, but I just happen to don't believe in that stuff. Somebody can vibe at me and I'd rather feel whatever they vibe than make myself tight and yang and not feel what they're doing to me, because I'd have to get yang with them to not feel it, and then it's so hard for me to let that go out again that I'd rather just let them do it to me. But I can be nonattached about what they're doing, and as soon as I walk outside their aura all that stuff melts out of me, because I ain't doing it.

Here's the thing about emotional hassles: Emotional hassles are cheap! Emotional hassles? Wow, it's easier to change your mind and don't have them. I don't believe in that kind of wear and tear when you can just change your mind and make it be a groove.

I don't believe in being crazy. But it's having been there in those places that lets me come on and say that. If you're going to tell somebody something, if you can tell them you did that same thing and how you found out about it, that will help out. Psychology departments teach you that crazy is a mysterious disease that you have to be afraid of, and that's a crock. There ain't no such thing as crazy, and if anybody ever tells you so, that's because they're ignorant and afraid.
There ain't no such thing, because we all have free will and we're all doing what we want to do. That's how you can tell what somebody wants to do, because that's what they're doing.

I can see through insanity in one side and out the other. Anybody who comes to me as crazy as you can get and wants me to help them, and believes I can help them, I can care. And I say that out front, because I've done it thousands and thousands of times, and thousands and thousands of people across the United States know I can do that. Anybody who comes to me, no matter how crazy they are, if they sincerely want to get well and be helped, I can help them and show them reality - if they sincerely want it. And if they don't want it, you just got to say, "Free will," and be as compassionate as you can.

I think that schizophrenia is a moral problem. And that's a far out thing to say, but I see people make the wrong decisions and get nutty, and I see people make the right ones and come back. There's a thing about your mind, which is that finally you can't blow it. You just can't blow your mind. Which is why I'm not worried about getting crazy myself. I've been that way so many times and found my way out that it's just my back yard now, and it doesn't scare me. I see people all the time in all states of disrepair and nuttiness shape up upon finding out that they can take care of themselves and they can be masters of their own karma. There ain't nothing doing it to you. You're doing it yourself. If it's not groovy, it's because you're doing it wrong. And if you do it better, it'll get better.

I'm saying that if you're going to follow the discipline of, "As you sow, so you shall reap," you're going to have to make value judgments about what is better ways to sow and what is worse ways to sow, and you've got to be a karma yogi.
You've got to work it out in front of you, and there is nothing you can do that will absolve you of the responsibility of making accurate, moral choices forever.

If you be open, if you be really open, you can let a lot of energy come through. Like when you're traveling across a desert place and you look out and you see a little farm, and there'll be a well, and a house and a barn, and a little circle of green trees and green plants out in the middle of the desert, and you know that all those green trees and green plants were pumped up out of the desert by that farmer - all that water was pumped up, and the part that didn't get picked up by the trees and plants evaporated and was gone and blew away to somewhere else, but that farmer managed to contain a little of it around him, and he made a little oasis that way. And you can do that, just like that - anybody can do that, if you be open and loving and really honest and really spiritual. It's not so much like a set of complicated directions as that you really know what being cool really is. Don't you? I think there ought to be an eleventh commandment after the first ten.

We have a school. What we did was we had people who had the right kind of degrees go to the state and get their credentials, and then we made an arrangement with the county where they let us be our own school. They were glad to do it, because otherwise they'd have had to buy another schoolbus because we had so many kids, and they were just as happy for us to take care of ourselves. And they made good arrangements for us - they let us get desks and stuff like that, and helped us set up our school.
We went through a lot of teacher ladies, many of them very flower-childy, and finally found one lady who was a good decent lady who would give the kids a bunch and would also keep them together and mix it with them and not be afraid to holler at them or love them, and she has a Tennessee credential.

The Mennonites and the Amish people came from Lancaster County, Pennsylvania, to southern Tennessee about twenty years ago and kind of opened up the way for us in a couple of ways. One of them is about kids not having to go past the eighth grade, because the Amish kids don't have to. We made an agreement with Tennessee that our kids would be able to pass the eighth-grade examination when they were old enough to be in the eighth grade. Actually we could go further than that, and we have people who could teach it, but when we see some of these great big hulking longhaired beatniks about six feet tall and about 160 pounds sitting in a school room when there's work to be done, we say that most of them can go out and drive a tractor if they'd rather. So the oldest person we have in our school is about fourteen.

We try to teach the kids the true facts about their planet, and numbering and lettering systems that the rest of the population uses. We have to do that because we don't want them to be strangers in their culture. And it takes about half of our time to cover the Tennessee curriculum, and the other half of the time we do like an apprentice trip, and the kids go out and learn stuff - real things that's happening on the Farm - and learn real skills, including stuff like basic physics and electronics. They're down at the motor pool for a while watching what happens, and they know how trucks get fixed. Our young boys are vitally interested in how to fix tractor transmissions. They really want to know how long it'll take a wheat crop to grow. And they just share that information among them.
They're really hot to know it, because the grownups are interested in it, and what looks like it's good enough for the grownups looks like it's good enough for the kids. And then school becomes not such a problem, because the kids want to know what you're into - where's the goodies, where's the action. If the whole Farm went out and cut cane one day or something, the kids would feel terrible if they were left out. That's where the action would be - they want to be where the action is. The school's more to introduce them into our life and not to educate them to some abstract standard.

The thing is, we're like the Hopi Indians. It's not that we have a life and then a religious life, it's that our whole thing is all woven in together. And our kids meditate with us. They meditate in school in the mornings, and they take it serious. They sit quietly. They don't just assume the bodily posture, they get stoned.

[Q: What's our relationship to the material plane?]

World without end, time without stop. I don't know anything about beginnings, but once upon a time there was a ball of molten gaseous star that didn't have anybody like us living on it - it was too hot and too sterile, and it was there. And maybe this is how it happened and maybe it was a little different technically, but maybe a big comet or something came slooshing past that star, and it kind of sucked off a little glob of it that orbited around that star, and it cooled. And because matter in free space assumes a spherical shape just like water goes into a round droplet, it got round, and it cooled over billions and billions of years.
And in the cooling process there was a great releasing of chemicals and great electrical charges, and there were various things came down from the original things. When it was part of a star it was just hydrogen, atomic weight of one, and it got complicated, and all those hydrogens got built into other stuff like water and oxygen and rock and iron and aluminum, and that stuff sloshed around, and some of it came through a process where it became alive and became different from the rest of the stuff, because the rest of the stuff just kept running down all the time. When it rained it washed all the stuff off the mountains into the oceans, and everything followed entropy. Except something changed a little bit, and we don't know why or anything about how it happened, but something got alive. And we can follow the record of that in the rocks of the planet for the last billions of years about how something got alive, and it got where it could reproduce. It changed and it specialized, and it got to be where it had one cell, and it had more than one cell, and then the cells got specialized. And the specialized cells got specialized into organs, and the organs became things like feet and limbs and fins and scales and eyes, and it kept evolving, changing and growing, following its own natural law, until part of it looked back at the planet with its eyes that it grew out of the muck and dirt of the planet, and it said, "I wonder what it is." And it thought, "How could it be?" Part of dead matter got alive and looked back at dead matter and said, "I am that, and I am not that, but I am that." And part of this rock got smart enough to think about a rock. They say, "Is God conscious?"
And we say, "Some of Him is. We're some of Him that's conscious."

Here's how enlightenment works: It answers your childhood dreams in your own terms, in the way that you understand the best, in the system that you value the most, and in the way that you want the questions answered.

[Q: Could you explain about people using their own yang creative energy?]

I throw my energy out, and it's like throwing out a handball, and it comes back, sometimes with a little more than when I threw it out. Sometimes I get chunks of energy coming this way that I didn't even throw. And I put as much on them as I can, and I find myself transmitting stuff I don't originate.

And they say that when a Zen teacher achieves that state of mind with a student that they can be there without any content in it - both are telepathic and neither of them puts anything into it. And that teacher was supposed to have had that happen with his teacher, and there's supposed to be an unbroken line of pure mind like that back to the man who shook the hand of Buddha twenty-five hundred years ago. Transmission of pure mind is called satori.
A state of pureness, a transmission without any message - no message, just the Universe. And so you can get to a state of consciousness where you can learn how to create yogas on your own as you go along, and keep your energy up all day long. That's why I don't say that the meditation a person does ought to be a half hour a day.

You seen one Buddha, you seen 'em all.
There's a difference between love and lust. Love is always cool, lust ain't ever.

Part of being a householder yogi is that you believe that your children are part of your immortality - that your children are another chance for you to do it right. We ain't celibate yogis that go up on a hilltop by themselves and don't have anything to do with ladies. We be yogis and yoginis together in our families and believe that working it out with our kids and raising them to be sane and honest is a heavy yoga and that if you can turn out a kid that's pretty sane, that's heavier than writing a poem.

What tantric yoga is about is that males and females have different signs on their electricity, like positive and negative. They both have energy, but the signs are different. The result of that is you can take an uptight man and an uptight woman and let them share the same energy and they can both be made refreshed and relaxed - both of them, because the woman's minuses and the man's pluses can cancel out just like an algebra equation cancels. You can work fatigue and uptightness and subconscious out of the system just by really sharing your electricity with somebody of the opposite sex that you're in love with.
this is a fulfillment, you should stop then. There's a western tendency to once in the saddle never stop. Much western loving would be stopped by the eye of truth. Tantric loving thrives on the eye of truth. When people look in each other's eyes with the eye of truth when they're tantric loving, it turns them on.

It's like filling up a bathtub - once the water's on, wide open all the way, you still have to wait a while, and once you get vibrating good and you're feeling good, hang in there and do it for a long time, and it cycles your energy and it'll heal you. You can wake up in the morning and have a flu or something and not feel like getting out of bed, and make love properly and put yourself on your feet, feeling good and able to go out and make it. And you're supposed to heal each other that way, and you can, and that's what holy matrimony is about: Holy matrimony is the tantric yoga of the Catholic Church.

On the Farm our marriages are till death do you part, for better or for worse, blood test, the county clerk, and the works. When we got to Tennessee, almost none of my students were married to each other because I hadn't been able to marry anybody and they didn't know anybody they wanted to marry them, so they were just being together. And we got there and we said, "What does it take to be a preacher in Tennessee?" And they said it takes a preacher and a congregation and you're a church. So I didn't have to do anything as such, I'd just send a couple down to get a blood test and go to the county clerk's and get a marriage license, and they come back to the Farm, get married on Sunday morning, I sign it as a minister who marries them, and they go back in and they're legally married.
And they're all morally married, too, because we get married after the meditation in the morning, when everybody's really stoned and everybody's in a truth-telling place, and you say those vows, you know, that you'll stay with somebody and that you really mean it, and there's four hundred folks digging it and paying attention and pretty stoned and pretty telepathic with you. It's a heavy ceremony - we get stoned on weddings when we have them. Sometimes folks are so heavy at weddings - people say their vows so heavy and so pure it just stones everybody. Cop to love wherever you find it, and don't quibble.
We believe in staying in contact with our kids. We have a baby girl who has been responded to every time she ever said anything. If she said something, somebody said, "Huh?" If she looked you in the eyes, people looked back and admitted they saw her too. We always assumed that she could see, from the moment she was born. You stay in contact with them, and they're part of your family and they be with you. They don't grow up and run away and grow their hair long when they get sixteen or something, or in our case cut it. They'll stay home and grow their hair long, and help you out with the thing. We tell our kids where it's at. I think the idea of letting kids go crazy until they're six years old and then putting them in public school where they have to snap right now, you know, is a funny way to treat a kid. You ought to try to keep them sane and together. You have to tell a kid if he's doing something dumb and destructive - he's got to learn about that. Some people don't think you ought to tell anybody that. I think you ought to tell grownups if they're doing something dumb and destructive, not just kids. It ain't just a question of how you do it, it's a question of understanding what you're trying to do. What you're trying to do is to don't teach a kid to be a rip-off. If someone gets to be a rip-off, they can keep it up for the rest of their life. Maybe you've got to put out some juice. I see folks that want their kids to do things because they say so, but they ain't willing to put out as much energy as the kid's willing to put out to change the situation. Sometimes with, say, about a two-year-old, if I ask the kid to do something and they won't do it or just flop down on the floor and give me one of those numbers, what I do is I go over, pick him up, hold him by the legs and, step by step, walk him over to the thing I want him to do, take his hands and then we do it. And I do that until they say, "Aw, I'd rather do it my own way." It's funnier than a spanking. You got to do it every time and don't let one pass or they're going to eat you alive. If a kid stubs his toe or hurts himself, don't come over and say, "Poor baby, poor baby," and all that stuff, because it makes him think it's a big deal. It ain't a big deal, and they have to learn better than that. Don't cop to a kid being afraid. If a kid's afraid, don't say, "Poor baby," because that reinforces him in his fear and makes him think there's something to be afraid of, and there ain't anything to be afraid of. You have to try to keep kids up, and try to teach them good principles about which way is up, and what you do that goes in those directions. They talk about those flying saucers that make those square corners at a thousand miles an hour? That's how you ought to be emotionally. You ought to be able to just stop crying. Little kids can. We expect a kid to be able to get cool. We say, "Get it together," and they really do it. Anybody can. It's not a question of you don't let them do their emotional thing, it's that you teach them how to handle it as it comes along.
Anybody who's in their right consciousness you can teach, and if they're in their subconscious you got to train them until you can teach them. Training them is like helping them to find their way out of the maze. Then when they're out of the maze you can talk to them. But you have to say, "No, not that way, you can't go that way, that is a dead end. Keep going that way, keep going. No, no, not that side, no, can't be that one." I've been through a lot of changes. At first I thought it was okay to spank kids, and then I thought it wasn't and then I thought it was again. Because I saw that the amount of damage to the overall universe that is caused by slapping somebody on the ass is much less than the amount of damage to the universe that is caused by letting somebody grow up crazy. I think you should commonly rassle your kid around enough so it ain't a sudden shock to get physical with him. The thing about getting physical with your kids is that you shouldn't hang back and don't do much, be too civilized to ever have anything to do with him except when you punish him or discipline him or something like that; that's a weird situation. But if you're thick into the bod all the time, and sometimes it's a pat on the ass and sometimes there's a little juice in it, and it's all part of the same continuum, and it's not like a separate thing, it's like you're talking like a mother lion. If a cub bites a mother lion on the tit she goes whap. She don't lay him open or anything, or hurt him, but she lets him know to don't bite. Also you can holler with good vibes. You can really get loud if there's no anger in it. Kids have a hard time copping to somebody who ain't their biological parent. I think if you're going to come on to a kid you need a limiting device, so as not to come on to him too much. So I say his mother ought to come on to him physically - his biological mother who is breast compassionate with him. Single cats shouldn't never spank nobody else's kid. Never, never. That causes a lot of beatnik trouble.
And you can't spank or get very physical with little kids, like under two years. A kid might get spanked a bunch of times by his mother as he grows up, and maybe a couple or a few times by his father right in the transition stage, and then he ought to outgrow ever having it happen any more. Trying to get a kid to cop is not trying to make him say yes sir, no sir, please, or nothing like that. It's trying to get him to just look out his eyeholes and recognize you. That's all he's got to do to cop. He don't have to say nothing, just look out and recognize you, that's all it takes. Just so you know he's in there. They say an apple a day keeps the doctor away - having your attention attracted once a day will keep you from getting crazy.
Don't get emotionally involved with your kid ripping you off. Because kids do that out of a tropism - it doesn't mean they're morally bad or anything, it's just that it's pretty, it's energy, it's nice - they want some. One way is to try to make it so that everybody is included, and then possibly you could work it out so they could get some and they're satisfied with it. If it's a case where a kid has just gone outlaw and says, "I want it all," which happens sometimes, don't get emotional, don't get mad, pick him up and carry him away from the energy source. If you be hung up about a kid crying and pay attention to the crying then he's got the weight of your attention behind his trip. If it comes to it, just stash the kid in the bed if they're that size. Step outside and hang out in another room. Keep an ear cocked, see when they quiet down. See if you can walk back in without them starting up again. If they start up again, walk back out again. If you want to teach a kid to don't cry, using a bedroom like that, you've got to be willing to walk in and out of that room and open and close that door a whole lot of times. And you can't just think you can walk in and say, "You be good," and just do that and go about your business. Don't feel uptight if a kid makes you deal with him a whole lot of times in a row - think of it as a lot of opportunities to put a teaching down. If you put the same teaching down a lot of times in a row the same way, the kid will learn it. It's not putting the kid in solitary confinement or anything like that. What happens is, if you isolate them from the energy a little bit, they run down, and quit doing the trip and get interested in something else.
It's an interesting thing: Deer babies react to things like a rustle in the brush or a noise like that, but a human baby responds to ruffles in the vibrations, not the material plane, and the baby don't care what's going on in the material plane as long as the mother's vibrations are cool. And the mother can be sitting on the carriage at a sawmill nursing the baby, and if she's cool the baby can be cool in that situation. But if the mother's uptight or the mother's in an uptight situation, that's what makes the baby cry. That's what the baby really feels, and that's telepathic, and that's why we feel that it's good for babies to be raised by their real mothers. We don't go along with the destruction of the family idea and that kids are supposed to be all socialized by being raised by a whole bunch of other folks. My real opinion about it is that that makes crazy kids. The biological mother has certain interior psychedelics that her body manufactures to keep her stoned enough to match speeds with her kid, so she can be as stoned as her kid is and relate with her kid. And she's equipped to do that. There's hormonal changes, and you get stoned on hormones - they get you heavy. So there's a relationship between a mother and a child that's realer than just conceptual, that's really vibrational. Fathers can be compassionate with the mother, and if they're compassionate with the mother they can be compassionate with the baby. The lady and the baby are really a unit, and the father can be a unit in that, too, of his own free will, if he wants to be that cool. The last kid that Ina May and I had is over a year old now. When she was really new I used to make special occasions to be really quiet and meditate with her for a few hours every now and then, so I could feel her head when she was young like that and know that part of her.
I really took care to meditate with her and get high like that, because as she got older she learned a little language and a little ego and a little trick here and a little game there, and then they're just kind of in a place until they get adult enough to have free will enough to choose to be cool on their own. And there's a place between Buddhahood and enlightenment that's a little rocky. You're born a Buddha, and you can get back to it, and you don't really quit being it, you only think you do. Folks should be careful what kind of vibes they put on a new baby. You got to be really gentle with young kids. A kid under a year, year and a half, is part of his mother's aura and should be able to be with her. If the kid's got to take a nap now and then or the kid's on some kind of a trip and so you got to leave him in his bed to cry it off for an hour or two, that kind of thing is cool. But in general your kid ought to have access to you every time he comes on to you. I say all that stuff because I feel about children the way that, according to the Bible, Jesus felt about them too. Which is, better to have a millstone tied around your neck than that you do something wrong to one of those children. If you're a parent, you have accepted the karma of another human being who is too young to fend for himself for many years, and for whom you must be responsible until he is able to fend for himself. If you don't come up with everything you've got to give him a fair shake, which is an upbringing that gives him a reliable, accurate idea of the Universe, then you've short-changed him.
TO PROSPECTIVE MIDWIVES
Spiritual midwifery recognizes that each and every birth is the birth of the Christ child. The midwife's gig is to do her best to bring both the mother and child through their passage alive and stoned and to see that the sacrament of birth is kept Holy. The Vow of the Midwife has to be that she will put out one hundred per cent of her energy to the mother and the child that she is delivering until she is certain that they have safely made the passage. This means that she must put the welfare of the mother and child first, before that of herself and her own family, if it comes to a place where she has to make a choice of that kind. If you're going to be a mi... Any midwife and any doctor ought to be able to cop to the Hippocratic Oath. Many people may not know that it contains specific provisions, such as: I will teach this Art if they would learn it without fee or covenant. I will give no deadly drug to any, though it be asked of me, nor will I counsel such. Especially I will not aid a woman to procure abortion.
PRENATAL CARE
Prenatal care is important for keeping close track of the physical and spiritual welfare of the mother and baby. You ought to give each lady a monthly checkup, starting from the third month of pregnancy on up until the last month, after which the checkups should be weekly. You'll need a certain amount of equipment in order to be able to give adequate prenatal care, all of which is available at a medical supply house or from a friendly doctor. You should have: a fetuscope, a blood pressure cuff, a good set of scales for measuring mothers' weights, a good set of infant scales, a stethoscope, a watch with a second hand, and a copy of Handbook of Obstetrics and Gynecology by Ralph C. Benson. Get someone who knows how to show you how to take blood pressure readings. The handbook shows how to take pelvic measurements on pages 82 and 83.
This is something you should do as a matter of course for ladies who haven't had a baby before. The bi-ischial diameter is big enough if it's 8 cm. or more. The diagonal conjugate of the pelvic inlet, or DC, should be 11.5 cm. Blood pressure and weight should be recorded at each checkup, and the lady's pee should be checked each time for protein and glucose. If there is excessive protein in pee during the first or third trimester, check with a doctor. The protein reading shouldn't go above +30 without you checking it out. (The protein reading and the blood pressure reading give you a check on the mother's kidney function, which is quite important to both her and her baby's health during her pregnancy.) The presence of glucose in the mother's pee might mean diabetes, and this should be watched for, as it means that she could have a larger than usual baby. The baby's heartbeat can usually be heard from six months on, and this should be recorded at each checkup. When you first start hearing the heartbeat, it may be as rapid as 150-160 beats a minute, and at term it's usually between 120-140 beats a minute. The mother's weight gain should be about 20-25 pounds during pregnancy. Some ladies, if they're underweight to begin with, can add 30 pounds without it being an excessive burden to their system. If the mother's blood pressure gets higher than 130/90, you should be in communication with your friendly doctor. Generally, though, vegetarian bean-eaters don't get into trouble with high blood pressure, toxemia and other ailments that hassle pregnant meat-eaters. In one hundred and six pregnancies at the Farm so far, we've had no toxemia or diabetes and had only one case of high blood pressure, which we kept in check by having the mother lie down for the last two months of her pregnancy. Watch for anemia; ladies whose hemoglobin is low get tired easily and have purplish-grey bags under their eyes. Be sure that the mother is eating right, making especially sure that she is getting enough protein. (See Margaret's article on Foodage.) Pregnant ladies should take prenatal vitamins and usually need some amount of iron and calcium; ask your friendly local pharmacist to help you with this. Check the mother's belly each time; feel how high her uterus is, check the growth of the baby, amount of fluid, presentation and position. You should be able to feel the baby's head between your thumb and fingers by pushing in just above the mother's pubic hair. If you don't feel it there, feel around above the mother's belly button. If you still aren't sure about the baby's position, get somebody who is more experienced to help you out. Know each mother's blood type, and in the case of Rh-negative ladies, have them checked at a local hospital for the presence of antibodies in their blood each month until they deliver. Grass is okay for pregnant ladies, but they should take no heavy psychedelics because of the risk of starting premature labor. Pregnant ladies should drink plenty of fluids and should get a moderate amount of exercise. Naps are good. Lovemaking is okay if it's tantric and gentle. If the mother's water bag breaks, lovemaking should stop so as to not introduce infection into the uterus. Hip your ladies to how hormones work: Basically what happens is that different hormone levels change during pregnancy and in the weeks just before and after childbirth. Hormones are as heavy consciousness-changers as psychedelics. If a lady knows that she's tripping on hormones, she doesn't have to get emotional or weird out when she feels a change in her consciousness.
She can chalk it up to hormones and make life easier for her family. The two or three days before a menstrual period, and for some ladies the day of ovulation, are days of hormone changes, and a lady might have to yoga more than usual on these days too.
1. Have the mother lie down with her legs and butt elevated, and keep her warm. 2. Gently massage the mother's lower belly to cause the uterus to contract. You'll be able to feel it contract. Do not push the uterus toward the mother's puss, but massage it. It should get hard. 3. If the bleeding doesn't stop right away, you should press firmly just above the mother's pubic bone with the side of your hand, or have someone else do this while you give the mother a shot of Pitocin or some other oxytocin. Hold your hand there as long as necessary. If you're applying enough pressure, you'll see the bleeding slow down and stop. 4. Hold the uterus if it tends to relax. Stimulating it by squeezing it gently reminds it to keep contracting. After the placenta is delivered, check the mother's puss to see if she'll need any stitches. If she hasn't torn or if the tear is very slight, put a couple of sterile pads over her puss.
Stitching the Taint
The rudiments of stitching are that you stitch muscle to muscle, fat to fat, and skin to skin. Since it's very difficult to learn stitching from any written instruction, I recommend that if you're going to get this far into the gig of midwifing that you have someone who knows how to stitch show you how to do it.
If the Water Bag Does Not Break
This is a condition that you ought to be able to recognize. The water bag usually breaks open during labor and fluid gushes out. If the bag doesn't break during labor, the baby may be born still enclosed in the membranes. If this happens, remove the bag from the baby's nose and mouth so he can breathe.
Starting Procedures To Use if the Baby Does Not Breathe Right Away
These starting procedures are necessary for only about 5% of deliveries because 95% of babies will start spontaneously or with very slight stimulation when they're born. Learn these starting procedures by heart. They aren't needed often, but when they're needed, it's life and death. 1. Suction the baby's mouth and each nostril as previously described. 2.
Pick up the baby by his ankles. 3. Slap him on the soles of his feet once or twice, enough to joggle his bod. 4. Lay him on his side with his head lower than his bod. Run your fingers quite vigorously several times up the baby's spine from butt to neck. The baby may breathe when this is done a few times. If he doesn't respond, begin the next step. 5. First make sure that the baby's airway is clear by suctioning out any mucus again. Using mouth-to-mouth resuscitation, breathe gently into the baby's nose and mouth. The baby's lung capacity is much smaller than yours, so you shouldn't blow a lot of breath into him. Do this rapidly several times. If the baby doesn't breathe after several puffs, keep trying. 6. If you can't feel a pulse after a minute and a half or two, begin cardiac compression. This is done by placing one hand behind the baby's back to provide firm support. Use the index finger of the other hand and press firmly on the baby's breast bone right between his nipples. The idea is to squeeze the baby's heart between your finger and the baby's spine. Do about two presses a second until you've done this about twelve times. Then give the baby several puffs of air into his mouth and nose, and then go back to cardiac compression. Keep alternating compression and mouth-to-mouth. Resuscitation continued for fifteen to twenty minutes has saved babies, without brain damage. 7. Keep the baby slightly warm, but not hot, at all times. This helps him to keep some of his bod energy on.
Abnormal Deliveries and Complications
Most deliveries are normal. You should be able to recognize it if there's a problem though. Here's a list of things you ought to be able to recognize: 1. Breech presentation 2. Prolapsed umbilical cord 3. Excessive bleeding (already discussed) 4. Limb presentation (foot or arm sticking out of puss) 5. Inverted uterus. If 1, 2, or 4 should come up, you should try to get the mother to a hospital. [Photo: Kathryn McClure, midwife, and Grace] Here's what to do in a breech delivery if you can't get there in time: 1.
Have the mother get into the birthing position. 2. Let the butt and trunk of the baby deliver spontaneously. 3. Support the baby's legs and bod as they're delivered, letting the legs dangle astride your arm, with your palm under the bod. 4. The head usually comes out on its own. Sometimes, though, the head doesn't deliver within three minutes after the delivery of the waist and upper bod. 5. If the head takes longer than three minutes, you should create an airway for the baby to breathe, as his cord is compressed by his head in the birth canal and he isn't getting any blood by this route. Put the middle and index fingers of your hand along the baby's face with your palm toward his face. Put your hand in till you reach his nose. Push down on the mother's puss so his face is clear until his head is delivered.
Prolapsed Umbilical Cord
If the umbilical cord comes out of the cervix or puss before the presenting part, it's called a prolapsed cord. The baby is in danger of suffocation for the same reason as in a breech delivery. Here's what you should do: 1. Lay the mother down with her legs and butt elevated. 2. Put on a sterile glove and put your hand into her puss. Push the baby's head up to allow blood to flow through the cord. 3. Don't try to replace or press on the cord. 4. Get the mother to a hospital. Prolapse of the umbilical cord happens only about once in two hundred advanced pregnancies.
Limb Presentation
Get the mother to a hospital.
Inversion of the Uterus
Put on a sterile glove and push the uterus back inside the mother. Follow the above-mentioned measures to stop excessive bleeding.
Twins
When the first baby is born, tie the cord to prevent possible hemorrhaging from the second baby via the umbilical cord. Delivery otherwise should be the same as for a single birth.
Premature Labor
Try getting the mother drunk enough to stop rushes if there has been no show or dilation of the cervix.
Alcohol is a downer, which is why it's good for stopping labor.
Premature Water Bag Rupture
If the mother's water bag should break before labor starts and while she's lying down, have her stand up at once even though her water will leak out. The presenting part of the body may gravitate over her pelvic opening and keep the umbilical cord from washing down first, as it might if she stayed lying down. Sometimes labor doesn't start for days after the water bag breaks, and this is okay if there's no infection present. We check for this by taking the mother's temperature every day, which should stay at 98.6°F, and by listening to the baby's heartbeat, which should stay at a rate between 120 and 150 beats a minute. No making love after the water bag breaks.
Premature Babies
Keep him warm at all times. The smaller the baby is, the more important this is. A serviceable incubator can be provided by wrapping the naked baby in aluminum foil, leaving his face out, until he can be got to a hospital. Keep a premature baby clear of blood and mucus in his mouth and throat. Make extra sure that his cord stump isn't oozing any blood. If it is, tie it again so that it doesn't. He needs all of his blood. Remember he's more susceptible to infection than a larger baby, so be very careful not to infect him.
Tending to the Baby
Inspect the baby carefully to see how all his equipment works. Watch his color to see that he stays good and pink, notice if he has a good startle reflex, notice if he breathes without sighing or straining. With a sterile, soft cloth, wipe off any water or blood, but leave most of the white cream (vernix) on the baby's skin. Trim the ends of the cord string. Put some alcohol on the end of the cord stump and around the base of the belly button. Put on the diaper and the kimono and wrap him in the second receiving blanket with a corner over his head to keep it warm. Put a drop of silver nitrate in each eye on the white part.
Syringe out the baby's nose and mouth often to clean out mucus. The baby should pee and have his first shit - which will be greenish-black and sticky - within twenty-four hours after birth. We put the baby to the breast as soon as we're sure that both he and the mother are okay. When he's twelve hours old, we give him some boiled water in a bottle or eyedropper and repeat this as much as he will take it until the mother's milk comes in. The baby should sleep on his belly. This allows him to drain any mucus or fluid that he might spit up without choking on it. Except for the baby's face and hair, use cotton balls and oil to clean his skin for the first week, or until his belly button is completely healed. Keep putting alcohol on the cord stump and around the base until it's completely healed and dry. Check the belly button now and then to see that it doesn't smell foul. If it does and seems sore, see a doctor right away. Warm water is okay for washing face and hair. Use oil on his head and forehead if it seems to need it. Some amount of jaundice is common in newborns, appearing on the third or fourth day. Have the baby checked by a doctor if it doesn't clear up within a couple or three days, or if the baby has a fever with it.
Care of the Mother and Baby
You should always remain with the mother for at least an hour following the delivery. See that the mother gets all that she wants to drink. Leave a lady you can trust to stay with the new parents and baby to keep the mother and baby quiet and well taken care of for a couple of days. Before you leave, you should unwrap the baby and check the color of his extremities, ears, and lips, and check the umbilical tie. His ears and lips should be pink. If they're bluish, you should have a doctor look at him. Be sure that his cord stump is not oozing any blood.
[Photo: Isaiah Kanies, born August 5, 1972, 5:15 a.m., at the Farm. 8 lb. 1 oz.]
Stephen, MONDAY NIGHT CLASS. Politics and Acid 5/4/70. Praise and Blame 2/16/70. Stephen - Farm Meeting 7/6/73. Stephen - Columbia, Mo. STEPHEN and the FARM BAND, recorded on the Farm. Hey Beatnik! This Is the Farm Book by Stephen and the Farm © 1974 the Book Publishing Co. Printed on the Farm. All these can be ordered from The Book Publishing Co., The Farm, Summertown, Tenn. 38483. Wholesale prices available to bookstores and on orders of five or more of any item. | 2019-10-17 02:49:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2702716290950775, "perplexity": 8771.316281253588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672548.33/warc/CC-MAIN-20191017022259-20191017045759-00011.warc.gz"}
http://health.maxosoft.com/b7zq6/79bcb8-inverse-matrix-2x2-calculator | As you can see, our inverse here is really messy. You can input only integer numbers or fractions in this online calculator. To calculate the inverse matrix you need to do the following steps: set up the matrix (it must be square) and append the identity matrix of the same dimension to it. Select the matrix size and enter the matrix: A-1. To find a 2×2 determinant we use a simple formula that uses the entries of the 2×2 matrix. Step 4: Press the Inverse Key [$$x^{-1}$$] and press Enter. Site: http://mathispower4u.com This inverse matrix calculator helps you find the inverse matrix value of a given 2x2 matrix from its input values. In this tutorial we first find the inverse of a matrix, then we test the above property of an identity matrix. As a result you will get the inverse calculated on the right. A matrix X is invertible if there exists a matrix Y of the same size such that XY = YX = I_n, where I_n is the n-by-n identity matrix.
Sometimes there is no inverse at all (see: Multiplying Matrices, Determinant of a Matrix, Matrix Calculator, Algebra Index). If some variable is absent from your equation, enter zero in its place in the calculator. Use our online inverse matrix calculator below to solve 2x2, 3x3, 4x4 and 5x5 matrices. This free online inverse matrix calculator computes the inverse of a 2x2, 3x3 or higher-order square matrix, and lets you verify the results of matrix addition, subtraction, multiplication, determinant, inverse or transpose, or perform such calculations yourself using these formulas and calculators. If you need any other stuff in math, please use our Google custom search. The calculator given in this section can be used to find the inverse of a 2x2 matrix. 2x2 matrices are most commonly employed in describing basic geometric transformations in a 2-dimensional vector space. SPECIFY MATRIX DIMENSIONS: Please select the size of the square matrix from the popup menu, click on the ... Enter the 4 values of a 2 x 2 matrix into the calculator; this should follow the form shown above, with a, b, c, and d being the variables. Use the "inv" method of numpy's linalg module to calculate the inverse of a matrix. Multiplying a matrix by its inverse gives the identity matrix. Calculate the determinant of the matrix with the formula a*d - b*c, then calculate the inverse matrix using the determinant (the "magnitude") and the formula above.
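The determinant-based recipe above can be sketched directly in code. This is a minimal Python sketch, not the calculator's actual implementation; the matrix entries in the example are made-up values chosen so the arithmetic is easy to check by hand. Swap a and d, negate b and c, and divide every entry by the determinant ad - bc.

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]]: swap a and d, negate b and c,
    then divide each entry by the determinant ad - bc."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# Example: [[4, 7], [2, 6]] has determinant 4*6 - 7*2 = 10
print(inverse_2x2(4, 7, 2, 6))  # [[0.6, -0.7], [-0.2, 0.4]]
```

The zero-determinant check is exactly the "sometimes there is no inverse at all" case: a singular matrix has determinant zero and the division is undefined.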
Compilare i campi per gli elementi della matrice e premere il rispettivo pulsante. Calculating the inverse using row operations: v. 1.25 PROBLEM TEMPLATE: Find (if possible) the inverse of the given n x n matrix A. Well, for a 2x2 matrix the inverse is: In other words: swap the positions of a and d, put negatives in front of b and c, … A square matrix is singular only when its determinant is exactly zero. The calculator will find the inverse of the square matrix using the Gaussian elimination method, with steps shown. At this stage, you can press the right arrow key to see the entire matrix. The calculator will find the inverse of the square matrix using the Gaussian elimination method, with steps shown. Why to use a 2×2 matrix? Finding inverse of a 2x2 matrix using determinant & adjugate. This video explains how to find the inverse of a 2x2 matrix using the inverse formula. To calculate the inverse of a matrix in python, a solution is to use the linear algebra numpy method linalg.Example A = \left( \begin{array}{ccc} Reduce the left matrix to row echelon form using elementary row operations for the whole matrix (including the right one). Verify the results of 2x2, 3x3, 4x4, nxn matrix or matrices addition, subtraction, multiplication, determinant, inverse or transpose matrix or perform such calculations by using these formulas & calculators. The easiest step yet! Vous pouvez entrer des entiers relatifs et des fractions de la forme –3/4 par exemple. And you'll see the 2 by 2 matrices are about the only size of matrices that it's somewhat pleasant to take the inverse of. SPECIFY MATRIX DIMENSIONS: Please select the size of the square matrix from the popup menu, click on the The 2x2 Inverse Matrix Calculator to find the Inverse Matrix value of given 2x2 matrix input values. 3x3 Sum of Three Determinants. The Inverse matrix is also called as a invertible or nonsingular matrix. 
Online calculator to perform matrix operations on one or two matrices, including addition, subtraction, multiplication, and taking the power, determinant, inverse, or transpose of a matrix. Here 'I' refers to the identity matrix. Since we want to find an inverse, that is the button we will use. It does not give only It is given by the property, I = A A-1 = A-1 A. It does not give only the inverse of a 2x2 matrix, and also it gives you the determinant and adjoint of the 2x2 matrix that you enter. 2x2 Matrix Multiplication Calculator is an online tool programmed to perform multiplication operation between the two matrices A and B. Matrix Inverse is denoted by A-1. eval(ez_write_tag([[300,250],'calculator_academy-large-mobile-banner-2','ezslot_19',192,'0','0']));eval(ez_write_tag([[300,250],'calculator_academy-large-mobile-banner-2','ezslot_20',192,'0','1']));eval(ez_write_tag([[300,250],'calculator_academy-large-mobile-banner-2','ezslot_21',192,'0','2'])); In this case, (ad-bc) is also known as the magnitude of the original matrix. Here you will get C and C++ program to find inverse of a matrix. A 2X2 matrix is a tool used to help gain insight and outcomes in a dialogue. Set the matrix (must be square) and append the identity matrix of the same dimension to it. It is a matrix when multiplied by the original matrix yields the identity matrix. To find the inverse of a 3x3 matrix, first calculate the determinant of the matrix. This is the currently selected item. More in-depth information read at these rules; To change the signs from "+" to "-" in equation, enter negative numbers. Here you will get C and C++ program to find inverse of a matrix. La plus facile est la méthode des cofacteurs qui nécessite au préalable de calculer le déterminant de la matrice, mais aussi la comatrice C (qui est la transposée de la matrice des cofacteurs) : M − 1 = 1 det MtcomM = 1 det M tC. 
To obtain the inverse of a 2x2 matrix, you will require following a few steps: Swap the numbers in (row 1 & column 1) and (row 2 & column 2) Give opposite signs to the numbers in (row 1 & column 2) and (row 2 & column 1) Now, finally divide by the determinant of the native matrix. If the determinant is 0, the matrix has no inverse. The inverse of a matrix is expressed by A-1. eval(ez_write_tag([[728,90],'calculator_academy-medrectangle-3','ezslot_8',169,'0','0'])); The following formula is used to calculate the inverse matrix value of the original 2×2 matrix. Thinkcalculator.com provides you helpful and handy calculator resources. At this stage, you can press the right arrow key to see the entire matrix. Next, calculate the magnitude. Réduire la partie gauche de la matrice en forme échelon en appliquant les opérations élémentaires de lignes sur la matrice complète (incluant la partie droite). The following matrices can be inverted: - 2x2 matrices - 3x3 matrices - 4x4 matrices Best math tool for school and college! All you need to do now, is tell the calculator what to do with matrix A. Practice finding the inverses of 2x2 matrices. 
Solving linear equations using elimination method, Solving linear equations using substitution method, Solving linear equations using cross multiplication method, Solving quadratic equations by quadratic formula, Solving quadratic equations by completing square, Nature of the roots of a quadratic equations, Sum and product of the roots of a quadratic equations, Complementary and supplementary worksheet, Complementary and supplementary word problems worksheet, Sum of the angles in a triangle is 180 degree worksheet, Special line segments in triangles worksheet, Proving trigonometric identities worksheet, Quadratic equations word problems worksheet, Distributive property of multiplication worksheet - I, Distributive property of multiplication worksheet - II, Writing and evaluating expressions worksheet, Nature of the roots of a quadratic equation worksheets, Determine if the relationship is proportional worksheet, Trigonometric ratios of some specific angles, Trigonometric ratios of some negative angles, Trigonometric ratios of 90 degree minus theta, Trigonometric ratios of 90 degree plus theta, Trigonometric ratios of 180 degree plus theta, Trigonometric ratios of 180 degree minus theta, Trigonometric ratios of 270 degree minus theta, Trigonometric ratios of 270 degree plus theta, Trigonometric ratios of angles greater than or equal to 360 degree, Trigonometric ratios of complementary angles, Trigonometric ratios of supplementary angles, Domain and range of trigonometric functions, Domain and range of inverse trigonometric functions, Sum of the angle in a triangle is 180 degree, Different forms equations of straight lines, Word problems on direct variation and inverse variation, Complementary and supplementary angles word problems, Word problems on sum of the angles of a triangle is 180 degree, Domain and range of rational functions with holes, Converting repeating decimals in to fractions, Decimal representation of rational numbers, L.C.M method to solve time and 
work problems, Translating the word problems in to algebraic expressions, Remainder when 2 power 256 is divided by 17, Remainder when 17 power 23 is divided by 16, Sum of all three digit numbers divisible by 6, Sum of all three digit numbers divisible by 7, Sum of all three digit numbers divisible by 8, Sum of all three digit numbers formed using 1, 3, 4, Sum of all three four digit numbers formed with non zero digits, Sum of all three four digit numbers formed using 0, 1, 2, 3, Sum of all three four digit numbers formed using 1, 2, 5, 6, Ratio Rates and Proportions - Concepts - Examples. On each end of the spectrum designers create a matrix of 2×2 with opposite features (i.e. Calculate the magnitude of the first matrix use the formula a*d-b*c. Finally, calculate the inverse matrix. A matrix has many purposes, but it’s main use is for solving linear systems of equations. supports HTML5 video, Calculator Academy© - All Rights Reserved 2020, compute the inverse of the following matrix, how to calculate the inverse of a function, inverse laplace transform calculator with steps, how to find multiplicative inverse of a number, finding the inverse of a matrix using gaussian elimination calculator, inverse of the coefficient matrix calculator, find the inverse of the matrix calculator, how to determine if a matrix has an inverse, how to calculate pseudo inverse of a matrix example, inverse z transform calculator with steps, how to find the inverse of a 2 by 2 matrix, find the inverse of the matrix if it exists, inverse of linear transformation calculator, find an equation for the inverse for each of the following relations, multiplicative inverse calculator with steps, how to calculate the inverse of a 2×2 matrix, how do you find the multiplicative inverse, multiplicative inverse of complex numbers calculator, how do you find the inverse of a 2×2 matrix, how to find the inverse of a square matrix, finding the inverse of a matrix using gaussian elimination, find the 
adjoint and inverse of the matrix, using the inverse matrix to solve equations, finding inverse of a matrix using lu decomposition example, using inverse of matrix to solve equations, find the inverse of each of the following matrices, find the inverse of the coefficient matrix, how to solve a system of equations using inverse matrices, find inverse of matrix using gaussian elimination, how do you calculate the inverse of a matrix, how to find the inverse of a matrix by hand, how to find the inverse of an absolute value, finding inverse of a matrix using gaussian elimination, determinant of adjoint of adjoint of a matrix, how to solve system of equations using inverse matrix, find inverse of matrix using lu decomposition, inverse laplace transform online calculator, how to use inverse matrices to solve system of equations, find an equation of the inverse relation calculator, how to find the inverse of a quadratic equation, find original matrix from inverse calculator. Let's attempt to take the inverse of this 2 by 2 matrix. It is a matrix when multiplied by the original matrix yields the identity matrix. It offers nice usability with large input form, and includes no advertisement. العربية ... la multiplication de matrices, la matrice inverse et autres. cheap versus costly). Le celle che non servono vanno lasciate vuote per lavorare con le matrici non quadrate. Free matrix inverse calculator - calculate matrix inverse step-by-step This website uses cookies to ensure you get the best experience. Contribute to md-akhi/Inverse-matrix.c-cpp development by creating an account on GitHub. Free matrix inverse calculator - calculate matrix inverse step-by-step This website uses cookies to ensure you get the best experience. 3x3 Cramers Rule. Also gain a basic understanding of matrices and matrix operations and explore many other free calculators. How to calculate the inverse matrix. This app enables you to calculate inverse matrix of 2x2, 3x3, and 4x4 matrices. 
Laissez des cellules vides pour entrer dans une matrice non carrées. العربية ... la somma e il prodotto fra matrici, la matrice inversa. This should follow the form shown above, with a,b,c, and d being the variables. See step-by-step methods used in computing inverses, … The calculator will evaluate and display the inverse of that matrix. 2x2 Matrix Determinants. Inverse of 2x2 Matrix Formula. Find Inverse Matrix. In this lesson, we are only going to deal with 2×2 square matrices.I have prepared five (5) worked examples to illustrate the procedure on how to solve or find the inverse matrix using the Formula Method.. Just to provide you with the general idea, two matrices are inverses of each other if their product is the identity matrix. You could calculate the inverse matrix follow the steps below: Where a,b,c,d are numbers, The inverse is. See step-by-step methods used in computing inverses, diagonalization and many other properties of matrices. Show Instructions In general, you can skip … Calculating the inverse using row operations: v. 1.25 PROBLEM TEMPLATE: Find (if possible) the inverse of the given n x n matrix A. The matrix Y is called the inverse of X. the inverse of a 2x2 matrix, and also it gives you the determinant and adjoint of the 2x2 matrix that you enter. First, the original matrix should be … 4x4 Matrix Inverse calculator to find the inverse of a 4x4 matrix input values. Guide . Entering data into the inverse matrix method calculator. To find a 2×2 determinant we use a simple formula that uses the entries of the 2×2 matrix. Calculateur de la matrice inverse d'une matrice carrée n×n. Using this online calculator, you will receive a detailed step-by-step solution to your problem, which will help you understand the algorithm how to find the inverse matrix using Gaussian elimination. Inverse of a Matrix is important for matrix operations. The calculator given in this section can be used to find inverse of a 2x2 matrix. 
If you are a student, it will helps you to learn linear algebra! This is where Thinkcalculator.com provides you helpful and handy calculator … This is where Unlike general multiplication, matrix multiplication is not commutative. Free online inverse matrix calculator computes the inverse of a 2x2, 3x3 or higher-order square matrix. 3x3 Matrix Determinants. A matrix is a mathematical collection of values in a fixed number of rows and columns. Multiplying A x B and B x A will give different results. Determinant Calculator is an advanced online calculator. Inverse of a 2×2 Matrix. 2x2 Inverse Matrix Calculator to find the inverse of 2x2 matrix. Use it to help you learn about connections between things or people during your synthesis process. 2x2 Matrix has two rows and two columns. By using this website, you agree to our Cookie Policy. Pour utiliser le calculateur de matrice inverse, il suffit de rentrer chaque élément séparé d'un espace en effectuant ou non un retour charriot à chaque fin de ligne. Inverse of 2x2 Matrix Formula. 3x3 Inverse Matrix This calculator uses adjugate matrix to find the inverse, which is inefficient for large matrices, due to its recursion, but perfectly suits us here. To find the inverse of a 2x2 matrix: swap the positions of a and d, put negatives in front of b and c, and divide everything by the determinant (ad-bc). The calculator will evaluate and display the inverse of that matrix. Enter the 4 values of a 2 x 2 matrix into the calculator. This free app is a math calculator, which is able to calculate the invertible of a matrix. A 2×2 matrix is a tool that allows people to think and talk about issues. 2x2 Sum of Determinants. Show Instructions In general, you can skip … Next, transpose the matrix by rewriting the first row as the first column, the middle row as the middle column, and the third row as the third column. You could calculate the inverse matrix follow the steps below: Where a,b,c,d are numbers, The inverse is. 
Video transcript. Get the free "2x2 Matrix (Determinant, Inverse...)" widget for your website, blog, Wordpress, Blogger, or iGoogle. It is used to find the determinant to 2x2 matrix and 3x3 matrix step by step. SPECIFY MATRIX DIMENSIONS: Please select the size of the square matrix from the popup menu, click on the Inverse of an identity [I] matrix is an identity matrix [I]. The inverse matrix C/C++ software. This calculator uses adjugate matrix to find the inverse, which is inefficient for large matrices, due to its recursion, but perfectly suits us here. To obtain the inverse of a 2x2 matrix, you will require following a few steps: Swap the numbers in (row 1 & column 1) and (row 2 & column 2) Give opposite signs to the numbers in (row 1 & column 2) and (row 2 & column 1) Now, finally divide by the determinant of the native matrix. Determinant Calculator is an advanced online calculator. First, the original matrix should be … 4x4 Matrix Inverse calculator to find the inverse of a 4x4 matrix input values. 3x3 Sum of Determinants. First, set up your original 2×2 matrix. Since we want to find an inverse, that is the button we will use. Step 4: Press the Inverse Key [$$x^{-1}$$] and Press Enter. eval(ez_write_tag([[300,250],'calculator_academy-large-mobile-banner-1','ezslot_10',193,'0','0']));eval(ez_write_tag([[300,250],'calculator_academy-large-mobile-banner-1','ezslot_11',193,'0','1']));eval(ez_write_tag([[300,250],'calculator_academy-large-mobile-banner-1','ezslot_12',193,'0','2']));First, the original matrix should be in the form below. Some theory. A matrix that has no inverse is singular. . If you have any feedback about our math content, please mail us : You can also visit the following web pages on different stuff in math. Find more Mathematics widgets in Wolfram|Alpha. Site: http://mathispower4u.com Calculator. Inverse Matrix Calculator Inverse of a matrix is similar to inverse or reciprocal of a number. 
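The swap-negate-divide rule described above can be checked with a short NumPy sketch (an illustration added here, not part of the calculator page):

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    # Closed-form inverse of [[a, b], [c, d]]:
    # swap a and d, negate b and c, divide by the determinant ad - bc.
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: determinant is zero")
    return np.array([[d, -b], [-c, a]], dtype=float) / det

A = np.array([[4.0, 7.0], [2.0, 6.0]])
inv = inverse_2x2(4, 7, 2, 6)

# The formula agrees with the general-purpose routine
# and satisfies A @ inv = I.
assert np.allclose(inv, np.linalg.inv(A))
assert np.allclose(A @ inv, np.eye(2))
```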
| 2022-01-18 13:41:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6325353980064392, "perplexity": 771.0912617461438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00105.warc.gz"} |
https://www.sarthaks.com/1428220/quantum-number-address-electron-types-principal-quantum-number-azimuthal-quantum-number | # Quantum number is the address of an electron in any atom. They are of four types: (a) Principal quantum number (n), (b) Azimuthal quantum number (l), (c) Magnetic quantum number (m), (d) Spin quantum number (s)
Quantum number is the address of an electron in any atom. They are of four types:
(a) Principal quantum number (n)
(b) Azimuthal quantum number (l)
(c) Magnetic quantum number (m)
(d) Spin quantum number (s)
The principal quantum number tells us the number of the shell. The azimuthal quantum number tells us the name of the sub-shell: for s, p, d, f, l = 0, 1, 2, 3 respectively. The magnetic quantum number represents the orientation of the sub-shell, and the spin quantum number represents the quantum spin states.
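The selection rules above (for a given n, l runs from 0 to n-1; for a given l, m runs from -l to +l) can be sketched in a few lines of Python (an added illustration, not part of the original answer):

```python
def orbitals(n):
    # For principal quantum number n, azimuthal l takes 0..n-1 and
    # magnetic m takes -l..l, so shell n holds n**2 orbitals
    # (and 2*n**2 electrons once the two spin states are counted).
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

# l -> sub-shell letter, as stated in the answer.
subshell_names = {0: 's', 1: 'p', 2: 'd', 3: 'f'}

assert len(orbitals(1)) == 1   # 1s only
assert len(orbitals(2)) == 4   # 2s + three 2p orbitals
assert len(orbitals(3)) == 9   # 3s + 3p + 3d
```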
Element A has principal quantum number 2 for its last electron and 3 electrons in its valence shell; element B has principal quantum number 3 for its last electron and 7 electrons in its valence shell. The bond angle in the compound they form (around the single central atom) will be:
A. 120°
B. 90°
C. 180°
D. 60°
http://tex.stackexchange.com/questions/59284/citing-rfcs-with-biblatex | # Citing RFCs with biblatex
I want to cite RFCs in the format [RFCxxxx] instead of using the author's initials and the year it was published. Currently I am using the alphabetic style that comes with biblatex. So far, I tried using the \DeclareCiteCommand command to use the key field in the .bib file, but that does not seem to work. I would really appreciate any ideas as how to solve this problem.
This might help some people stumbling on this question: tex.stackexchange.com/questions/65637/… – Christian Jul 16 '13 at 15:19
Also, I found the manual class to work well for RFCs. – Christian Jul 16 '13 at 15:32
Dr.-Ing. Roland Bless generates a rfc.bib "BibTeX file of RFC index (converted daily from RFC Editor's XML index)", which looks suitable. – Joel Purra Oct 24 '14 at 15:01
You can use the shorthand field to override the label which is automatically generated by the alphabetic style:
\documentclass{article}
\usepackage[style=alphabetic]{biblatex}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@misc{A01,
author = {Author, A.},
year = {2001},
title = {Alpha},
}
@report{Cro69,
shorthand = {RFC0001},
author = {Crocker, S.},
year = {1969},
month = {4},
title = {Host Software},
note = {RFC 1},
}
\end{filecontents}
\begin{document}
Some text \autocite{A01,Cro69}.
\printbibliography
\end{document}
Thank you. I will look into that. – Stephan Turner Jun 10 '12 at 9:24
According to the IETF recommendations, it is recommended to use @techreport:
@techreport{rfc1654,
AUTHOR = "Yakov Rekhter and Tony Li",
TITLE = "{A Border Gateway Protocol 4 (BGP-4)}",
HOWPUBLISHED = {Internet Requests for Comments},
TYPE="{RFC}",
NUMBER=1654,
PAGES = {1-56},
YEAR = {1995},
MONTH = {July},
ISSN = {2070-1721},
PUBLISHER = "{RFC Editor}",
INSTITUTION = "{RFC Editor}",
URL={http://www.rfc-editor.org/rfc/rfc1654.txt}
}
You can probably add shorthand = {RFC1654} as explained by lockstep.
Another solution could be to use the natbib package
\defcitealias{jon90}{Paper~I}
\citetalias{jon90} Paper I
\citepalias{jon90} (Paper I)
but when I tested, I obtained
\usepackage{natbib}
\defcitealias{rfc6749}{RFC6749}
\citepalias{rfc6749}
\bibliographystyle{plain}
[RFC6749]
But this doesn't change it in the list.
http://www.opencvpython.blogspot.com/ | # Thresholding
Hi friends,
### Simple Thresholding
Here, the matter is straightforward: if a pixel value is greater than an arbitrary threshold value, it is assigned one value (may be white); otherwise it is assigned another value (may be black).
The function used is threshold(). The first param is the source image, which should be a grayscale image. The second param is the threshold value, which is used to classify the pixel values. The third param is maxVal, which represents the value to be given if the pixel value is more than (sometimes less than) the threshold value. OpenCV provides different styles of thresholding, and the style is decided by the fourth parameter of the function. Different types are:
1. cv2.THRESH_BINARY
2. cv2.THRESH_BINARY_INV
3. cv2.THRESH_TRUNC
4. cv2.THRESH_TOZERO
5. cv2.THRESH_TOZERO_INV
Two outputs are obtained. First one is a retval which I will explain later. Second output is our thresholded image.
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('gradient.png',0) # load any grayscale image
ret,thresh1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,127,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,127,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,127,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,127,255,cv2.THRESH_TOZERO_INV)
thresh = ['img','thresh1','thresh2','thresh3','thresh4','thresh5']
for i in xrange(6):
    plt.subplot(2,3,i+1),plt.imshow(eval(thresh[i]),'gray')
    plt.title(thresh[i])
plt.show()
Result :
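For intuition, the five modes can be reproduced with plain Numpy on a tiny made-up array. This is only a sketch of what the flags do (thresh=127, maxVal=255, matching the calls above), not the cv2 implementation:

```python
import numpy as np

# hypothetical 6-pixel "image" covering values below, at and above the threshold
img = np.array([0, 100, 127, 128, 200, 255], dtype=np.uint8)
t, maxval = 127, 255

binary     = np.where(img > t, maxval, 0).astype(np.uint8)   # THRESH_BINARY
binary_inv = np.where(img > t, 0, maxval).astype(np.uint8)   # THRESH_BINARY_INV
trunc      = np.where(img > t, t, img).astype(np.uint8)      # THRESH_TRUNC
tozero     = np.where(img > t, img, 0).astype(np.uint8)      # THRESH_TOZERO
tozero_inv = np.where(img > t, 0, img).astype(np.uint8)      # THRESH_TOZERO_INV
```

Note the comparison is strictly "greater than", so the pixel exactly at 127 falls on the "below" side in every mode.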
In the previous section, we used a global value as the threshold. But that may not be good in all conditions, for example where the image has different lighting in different areas. In that case, we go for adaptive thresholding. Here, the algorithm calculates the threshold for small regions of the image. So we get different thresholds for different regions of the same image, and it gives us better results for images with varying illumination.
It has three ‘special’ input params and only one output param.
1. Adaptive Method - It decides how thresholding value is calculated.
1. cv2.ADAPTIVE_THRESH_MEAN_C : threshold value is the mean of neighbourhood area.
2. cv2.ADAPTIVE_THRESH_GAUSSIAN_C : threshold value is the weighted sum of neighbourhood values where weights are a gaussian window.
2. Block Size - It decides the size of neighbourhood area.
3. C - It is just a constant which is subtracted from the mean or weighted mean calculated.
The piece of code below compares global thresholding and adaptive thresholding for an image with varying illumination.
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('sudoku.png',0) # load any unevenly lit grayscale image
img = cv2.medianBlur(img,5)
ret,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
th2 = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_MEAN_C,\
            cv2.THRESH_BINARY,11,2)
th3 = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,\
            cv2.THRESH_BINARY,11,2)
plt.subplot(2,2,1),plt.imshow(img,'gray')
plt.title('input image')
plt.subplot(2,2,2),plt.imshow(th1,'gray')
plt.title('Global Thresholding')
plt.subplot(2,2,3),plt.imshow(th2,'gray')
plt.title('Adaptive Mean Thresholding')
plt.subplot(2,2,4),plt.imshow(th3,'gray')
plt.title('Adaptive Gaussian Thresholding')
plt.show()
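To see what the mean method does for a single pixel, here is a hand-computation in Numpy. This is a sketch under assumed parameters (a 3x3 block instead of the 11x11 used above, C = 2, made-up pixel values):

```python
import numpy as np

# 3x3 neighbourhood around one bright pixel on a dark background
block = np.array([[10, 10, 10],
                  [10, 90, 10],
                  [10, 10, 10]], dtype=np.float64)
C = 2

# ADAPTIVE_THRESH_MEAN_C: threshold = mean of the neighbourhood minus C
local_thresh = block.mean() - C

# a pixel passes if it exceeds its own local threshold
centre = 255 if block[1, 1] > local_thresh else 0   # the bright pixel
corner = 255 if block[0, 0] > local_thresh else 0   # a background pixel
```

The bright pixel survives and the background does not, even though a single global threshold of 127 would have lost both.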
### Otsu’s Binarization
In the first section, I mentioned the first return value, retVal. Its use comes when we go for Otsu's Binarization. So what is this thing?

In global thresholding, we used an arbitrary value as the threshold value, right? So, how can we know whether the value we selected is good or not? The answer is trial and error. But consider a bimodal image, ie an image whose histogram has two peaks. For such an image, we can approximately take a value in the middle of those peaks as the threshold value, right? That is what Otsu binarization does.
So in simple words, it automatically calculates a threshold value from image histogram for a bimodal image. (For images which are not bimodal, binarization won’t be accurate.)
For this, our cv2.threshold() function is used, but we pass an extra flag, cv2.THRESH_OTSU. For the threshold value, simply pass zero. Then the algorithm finds the optimal threshold value and returns it as the first output, retVal. If Otsu thresholding is not used, retVal is the same as the threshold value you passed.
Check out the example below. The input image is a noisy image. First I applied global thresholding with a value of 127. Then I applied Otsu's thresholding directly. Finally I filtered the image with a 5x5 Gaussian kernel to remove the noise, then applied Otsu thresholding again. See how noise filtering improves the result in the figure below.
img = cv2.imread('noisy2.png',0)
# global thresholding
ret1,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)
# Otsu's thresholding
ret2,th2 = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv2.GaussianBlur(img,(5,5),0)
ret3,th3 = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# plot all the images and their histograms
titles = ['img','histogram1','th1',
'img','histogram2','th2',
'blur','histogram3','th3']
for i in xrange(3):
    plt.subplot(3,3,i*3+1),plt.imshow(eval(titles[i*3]),'gray')
    plt.title(titles[i*3])
    plt.subplot(3,3,i*3+2),plt.hist(eval(titles[i*3]).ravel(),256)
    plt.title(titles[i*3+1])
    plt.subplot(3,3,i*3+3),plt.imshow(eval(titles[i*3+2]),'gray')
    plt.title(titles[i*3+2])
plt.show()
### How does Otsu's Binarization work?
That is very simple. Since we are working with bimodal images, Otsu’s algorithm tries to find a threshold value which minimizes the weighted within-class variance given by the relation :
$$\sigma_w^2(t) = q_1(t)\sigma_1^2(t)+q_2(t)\sigma_2^2(t)$$
where
$$q_1(t) = \sum_{i=1}^{t} P(i) \quad \& \quad q_2(t) = \sum_{i=t+1}^{I} P(i)$$
$$\mu_1(t) = \sum_{i=1}^{t} \frac{iP(i)}{q_1(t)} \quad \& \quad \mu_2(t) = \sum_{i=t+1}^{I} \frac{iP(i)}{q_2(t)}$$
$$\sigma_1^2(t) = \sum_{i=1}^{t} [i-\mu_1(t)]^2 \frac{P(i)}{q_1(t)} \quad \& \quad \sigma_2^2(t) = \sum_{i=t+1}^{I} [i-\mu_2(t)]^2 \frac{P(i)}{q_2(t)}$$
So our plan is to find the value of $t$ which minimizes $\sigma_w^2(t)$, and it can be done simply in Numpy as follows :
img = cv2.imread('noisy2.png',0)
blur = cv2.GaussianBlur(img,(5,5),0)
# find normalized_histogram, and its cum_sum
hist = cv2.calcHist([blur],[0],None,[256],[0,256])
hist_norm = hist.ravel()/hist.max()
Q = hist_norm.cumsum()
bins = np.arange(256)
fn_min = np.inf
thresh = -1
for i in xrange(1,256):
    p1,p2 = np.hsplit(hist_norm,[i]) # probabilities
    q1,q2 = Q[i],Q[255]-Q[i] # cum sum of classes
    if q1 == 0 or q2 == 0: # skip empty classes to avoid division by zero
        continue
    b1,b2 = np.hsplit(bins,[i]) # weights
    # finding means and variances
    m1,m2 = np.sum(p1*b1)/q1, np.sum(p2*b2)/q2
    v1,v2 = np.sum(((b1-m1)**2)*p1)/q1,np.sum(((b2-m2)**2)*p2)/q2
    # calculates the minimization function
    fn = v1*q1 + v2*q2
    if fn < fn_min:
        fn_min = fn
        thresh = i
# find otsu's threshold value with OpenCV function
ret, otsu = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
print thresh,ret
(There are some optimizations available for this algorithm and that is left for interested people.)
So that's for today. It is a simple and basic tutorial.
Regards,
Abid K.
## Thursday, March 14, 2013
### Histograms - 4 : Backprojection
Hi friends,
Today, we will look into histogram back-projection. It was proposed by Michael J. Swain and Dana H. Ballard in their paper "Indexing via color histograms".
Well, what is it actually in simple words? It is used for image segmentation or finding objects of interest in an image. In simple words, it creates an image of the same size (but single channel) as that of our input image, where each pixel corresponds to the probability of that pixel belonging to our object. So in short, the output image will have our object of interest in white and remaining part in black. Well, that is an intuitive explanation.
(In this article, I would like to use a beautiful image of a bunch of rose flowers. And the image credit goes to "mi9.com". You can get the image from this link : http://imgs.mi9.com/uploads/flower/4649/rose-flower-wallpaper-free_1920x1080_83181.jpg)
How do we do it? We create a histogram of an image containing our object of interest (in our case, the rose flower, leaving out leaves and background). The object should fill the image as far as possible for better results. And a color histogram is preferred over a grayscale histogram, because the color of the object is a better way to define it than its grayscale intensity. (A red rose flower and its green leaves may have the same intensity in a grayscale image, but they are easily distinguishable in a color image.) We then "back-project" this histogram over our test image where we need to find the object, ie we calculate the probability of every pixel belonging to the rose flower and show it. Thresholding the resulting output properly gives us the rose flower alone.
So let's see how it is done.
Algorithm :
1 - First we need to calculate the color histogram of both the object we need to find (let it be 'M') and the image where we are going to search (let it be 'I').
import cv2
import numpy as np
from matplotlib import pyplot as plt
#roi is the object or region of object we need to find
roi = cv2.imread('rose_red.png') # hypothetical filename for the object patch
hsv = cv2.cvtColor(roi,cv2.COLOR_BGR2HSV)
#target is the image we search in
target = cv2.imread('rose.png') # hypothetical filename for the search image
hsvt = cv2.cvtColor(target,cv2.COLOR_BGR2HSV)
# Find the histograms. I used calcHist. It can be done with np.histogram2d also
M = cv2.calcHist([hsv],[0, 1], None, [180, 256], [0, 180, 0, 256] )
I = cv2.calcHist([hsvt],[0, 1], None, [180, 256], [0, 180, 0, 256] )
2 - Find the ratio R = M/I
R = M/(I+1) # +1 avoids division by zero for empty bins
3 - Now backproject R, ie use R as palette and create a new image with every pixel as its corresponding probability of being target. ie B(x,y) = R[h(x,y),s(x,y)] where h is hue and s is saturation of the pixel at (x,y). After that apply the condition B(x,y) = min[B(x,y), 1].
h,s,v = cv2.split(hsvt)
B = R[h.ravel(),s.ravel()]
B = np.minimum(B,1)
B = B.reshape(hsvt.shape[:2])
4 - Now apply a convolution with a circular disc, B = D * B, where D is the disc kernel.
disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
cv2.filter2D(B,-1,disc,B)
B = np.uint8(B)
cv2.normalize(B,B,0,255,cv2.NORM_MINMAX)
5 - Now the location of maximum intensity gives us the location of object. If we are expecting a region in the image, thresholding for a suitable value gives a nice result.
ret,thresh = cv2.threshold(B,50,255,0)
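The five steps above can be sketched end-to-end in plain Numpy on a toy single-channel "hue" image. All values here are made up for illustration, and the disc smoothing of step 4 is skipped; the real pipeline uses the 2D H-S histogram as shown above:

```python
import numpy as np

# step 1: histograms of the model (object) and the scene
model = np.array([5, 5, 5, 5], dtype=np.uint8)      # object patch: all hue 5
scene = np.array([[5, 5, 9],
                  [9, 9, 5]], dtype=np.uint8)       # image we search in
M = np.bincount(model.ravel(), minlength=256).astype(np.float64)
I = np.bincount(scene.ravel(), minlength=256).astype(np.float64)

# step 2: ratio histogram (+1 avoids division by zero for empty bins)
R = M / (I + 1)

# step 3: backproject R as a palette and clip to 1
B = np.minimum(R[scene], 1.0)

# step 5: threshold to get the object mask
mask = (B > 0.5).astype(np.uint8)
```

The hue-5 pixels (the "object") come out with probability 1, the hue-9 pixels with 0, so the mask picks out exactly the object pixels.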
Below is one example I worked with. I used the region inside blue rectangle as sample object and I wanted to extract all the red roses. See, ROI is filled with red color only :
Histogram Backprojection
Backprojection in OpenCV
OpenCV provides an inbuilt function, cv2.calcBackProject(). Its parameters are almost the same as those of cv2.calcHist(). One of its parameters is the histogram of the object we want to find. Also, the object histogram should be normalized before being passed to the backproject function. It returns the probability image. Then we convolve the image with a disc kernel and apply a threshold. Below is my code and output :
import cv2
import numpy as np
roi = cv2.imread('rose_red.png') # hypothetical filename for the object patch
target = cv2.imread('rose.png') # hypothetical filename for the search image
hsv = cv2.cvtColor(roi,cv2.COLOR_BGR2HSV)
hsvt = cv2.cvtColor(target,cv2.COLOR_BGR2HSV)
# calculating object histogram
roihist = cv2.calcHist([hsv],[0, 1], None, [180, 256], [0, 180, 0, 256] )
# normalize histogram and apply backprojection
cv2.normalize(roihist,roihist,0,255,cv2.NORM_MINMAX)
dst = cv2.calcBackProject([hsvt],[0,1],roihist,[0,180,0,256],1)
# Now convolute with circular disc
disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
cv2.filter2D(dst,-1,disc,dst)
# threshold and binary AND
ret,thresh = cv2.threshold(dst,50,255,0)
thresh = cv2.merge((thresh,thresh,thresh))
res = cv2.bitwise_and(target,thresh)
res = np.vstack((target,thresh,res))
cv2.imwrite('res.jpg',res)
Below is the output. Here the ROI is not just the flower; some green part is also included. Still, the output is good. On close analysis of the center image, you can faintly see the leaf parts, which are removed on thresholding :
Histogram Backprojection in OpenCV
Summary
So we have looked on what is Histogram backprojection, how to calculate it, how it is useful in object detection etc. It is also used in more advanced object tracking methods like camshift. We will do that later.
Regards,
Abid Rahman K.
References :
1 - Swain, Michael J. and Ballard, Dana H., "Indexing via color histograms", Third International Conference on Computer Vision, 1990.
2 - http://www.codeproject.com/Articles/35895/Computer-Vision-Applications-with-C-Part-II
3 - http://theiszm.wordpress.com/tag/backprojection/
## Wednesday, March 13, 2013
### Histograms - 3 : 2D Histograms
Hi friends,
In the first article, we calculated and plotted a one-dimensional histogram. It is called one-dimensional because we take only one feature into consideration, the grayscale intensity value of the pixel. In two-dimensional histograms, you consider two features. Normally it is used for finding color histograms, where the two features are the Hue & Saturation values of every pixel.
There is a python sample in the official samples already for finding color histograms. We will try to understand how to create such a color histogram, and it will be useful in understanding further topics like Histogram Back-Projection.
2D Histogram in OpenCV
It is quite simple and calculated using the same function, cv2.calcHist(). For a color histogram, we need to convert the image from BGR to HSV. (Remember, for a 1D histogram, we converted from BGR to grayscale). While calling cv2.calcHist(), the parameters are :
channels = [0,1] # because we need to process both H and S plane.
bins = [180,256] # 180 for H plane and 256 for S plane
range = [0,180,0,256] # Hue lies between 0 and 179 & Saturation between 0 and 255 (the upper bounds are exclusive)
import cv2
import numpy as np
img = cv2.imread('home.jpg') # hypothetical filename, any color image
hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
hist = cv2.calcHist( [hsv], [0, 1], None, [180, 256], [0, 180, 0, 256] )
That's it.
2D Histogram in Numpy
Numpy also provides a specific function for this : np.histogram2d(). (Remember, for 1D histogram we used np.histogram() ).
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('home.jpg') # hypothetical filename, any color image
hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
hist, xbins, ybins = np.histogram2d(h.ravel(),s.ravel(),[180,256],[[0,180],[0,256]])
First argument is H plane, second one is the S plane, third is number of bins for each and fourth is their range.
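A tiny made-up example shows how the bins work: each (H, S) pair increments exactly one cell of the 180x256 array.

```python
import numpy as np

# four pixels: two sky-blue-ish, two yellow-ish (hypothetical H and S values)
h = np.array([10, 10, 100, 100])
s = np.array([20, 20, 200, 200])

hist, h_edges, s_edges = np.histogram2d(h, s, bins=[180, 256],
                                        range=[[0, 180], [0, 256]])
# the bin width is 1 on both axes, so hist[10, 20] counts the (10, 20) pixels
```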
Now we can check how to plot this color histogram
Plotting 2D Histogram
Method - 1 : Using cv2.imshow()
The result we get is a two dimensional array of size 180x256. So we can show them as we do normally, using cv2.imshow() function. It will be a grayscale image and it won't give much idea what colors are there, unless you know the Hue values of different colors.
Method - 2 : Using matplotlib
We can use the matplotlib.pyplot.imshow() function to plot a 2D histogram with different color maps. It gives us a much better idea about the different pixel densities. But this, too, doesn't tell us at first glance what color is there, unless you know the Hue values of different colors. Still I prefer this method. It is simple and better.
NB : While using this function, remember, interpolation flag should be 'nearest' for better results.
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('home.jpg') # hypothetical filename, any color image
hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
hist = cv2.calcHist( [hsv], [0, 1], None, [180, 256], [0, 180, 0, 256] )
plt.imshow(hist,interpolation = 'nearest')
plt.show()
Below is the input image and its color histogram plot. X axis shows S values and Y axis shows Hue.
2D Histogram in matplotlib with 'heat' color map
In the histogram, you can see some high values near H = 100 and S = 200. It corresponds to the blue of the sky. Similarly, another peak can be seen near H = 25 and S = 100. It corresponds to the yellow of the palace. You can verify it with any image editing tool like GIMP.
Method 3 : OpenCV sample style !!
There is a sample code for color_histogram in OpenCV-Python2 samples. If you run the code, you can see the histogram shows even the corresponding color. Or simply it outputs a color coded histogram. Its result is very good (although you need to add extra bunch of lines).
In that code, the author created a color map in HSV. Then converted it into BGR. The resulting histogram image is multiplied with this color map. He also uses some preprocessing steps to remove small isolated pixels, resulting in a good histogram.
I leave it to the readers to run the code, analyze it and have your own hack arounds. Below is the output of that code for the same image as above:
OpenCV-Python sample color_histogram.py output
You can clearly see in the histogram what colors are present, blue is there, yellow is there, and some white due to chessboard(it is part of that sample code) is there. Nice !!!
Summary :
So we have looked into what is 2D histogram, functions available in OpenCV and Numpy, how to plot it etc.
So this is it for today !!!
Regards,
Abid Rahman K.
## Tuesday, March 12, 2013
### Histograms - 2 : Histogram Equalization
Hi friends,
In last article, we saw what is histogram and how to plot it. This time we can learn a method for image contrast adjustment called "Histogram Equalization".
So what is it ? Consider an image whose pixel values are confined to some specific range of values only. For eg, a brighter image will have all pixels confined to high values. But a good image will have pixels from all regions of the intensity range. So you need to stretch this histogram to either end (as given in the below image, from Wikipedia) and that is what Histogram Equalization does (in simple words). This normally improves the contrast of the image.
Histogram Equalization
Again, I would recommend you to read the wikipedia page on Histogram Equalization for more details about it. It has a very good explanation with worked out examples, so that you would understand almost everything after reading that. And make sure you have checked the small example given in "examples" section before going on to next paragraph.
So, assuming you have checked the wiki page, I will demonstrate a simple implementation of Histogram Equalization with Numpy. After that, I will present you OpenCV function. ( If you are not interested in implementation, you can skip this and go to the end of article)
Numpy Implementation
We start with plotting histogram and its cdf (cumulative distribution function) of the image in Wikipedia page. All the functions are known to us except np.cumsum(). It is used to find the cumulative sum (cdf) of a numpy array.
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('wiki.jpg',0) # the low-contrast image from the Wikipedia page
hist,bins = np.histogram(img.flatten(),256,[0,256])
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max()/ cdf.max() # scaled only so it can be plotted alongside the histogram
plt.plot(cdf_normalized, color = 'b')
plt.hist(img.flatten(),256,[0,256], color = 'r')
plt.xlim([0,256])
plt.legend(('cdf','histogram'), loc = 'upper left')
plt.show()
Input Image and its histogram
You can see histogram lies in brighter region. We need the full spectrum. For that, we need a transformation function which maps the input pixels in brighter region to output pixels in full region. That is what histogram equalization does.
Now we find the minimum histogram value (excluding 0) and apply the histogram equalization equation as given in the wiki page. But I have used the masked array concept from Numpy here. For a masked array, all operations are performed on non-masked elements only. You can read more about it in the Numpy docs on masked arrays.
cdf_m = np.ma.masked_equal(cdf,0)
cdf_m = (cdf_m - cdf_m.min())*255/(cdf_m.max()-cdf_m.min())
cdf = np.ma.filled(cdf_m,0).astype('uint8')
Now we have the look-up table that gives us the information on what is the output pixel value for every input pixel value. So we just apply the transform.
img2 = cdf[img]
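The snippets above can be collected into one function. Below is a sketch of the whole Numpy pipeline (not cv2.equalizeHist itself, though the idea is the same), checked on a made-up low-contrast array whose values are squeezed into 100-104:

```python
import numpy as np

def equalize_hist(img):
    # build the cdf-based lookup table exactly as derived above
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_m = np.ma.masked_equal(cdf, 0)               # ignore empty bins
    cdf_m = (cdf_m - cdf_m.min()) * 255 / (cdf_m.max() - cdf_m.min())
    lut = np.ma.filled(cdf_m, 0).astype('uint8')
    return lut[img]                                  # apply the transform

# 100 pixels with values 100..104 only - a very low-contrast "image"
low_contrast = np.repeat(np.arange(100, 105, dtype=np.uint8), 20)
stretched = equalize_hist(low_contrast)              # spans the full 0..255 range
```

The five distinct input levels are mapped onto five levels spread across the full 0-255 range, which is exactly the contrast stretch shown in the histograms above.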
Now we calculate its histogram and cdf as before ( you do it) and result looks like below :
Histogram Equalized Image and its histogram
You can see a better contrast in the new image, and it is clear from the histogram also. Also compare the cdfs of two images. First one has a steep slope, while second one is almost a straight line showing all pixels are equi-probable.
Another important feature is that, even if the image was a darker image (instead of the brighter one we used), after equalization we would get almost the same result. As a consequence, this is used as a "reference tool" (I can't find a more suitable term) to bring all images to the same lighting conditions. This is useful in many cases. For eg, in face recognition, before training on the face data, the images of faces are histogram equalized to make them all have the same lighting conditions. It provides better accuracy.
OpenCV Implementation
If you are bored of everything I have written above, just leave it. You need to remember only one function to do this, cv2.equalizeHist(). Its input is just a grayscale image and its output is our histogram equalized image.
Below is a simple code snippet showing its usage for same image we used :
img = cv2.imread('wiki.jpg',0)
equ = cv2.equalizeHist(img)
res = np.hstack((img,equ)) #stacking images side-by-side
cv2.imwrite('res.png',res)
See the result :
OpenCV Histogram Equalization
So now you can take different images with different light conditions, equalize it and check the results.
Histogram equalization is good when the histogram of the image is confined to a particular region. It won't work well where there are large intensity variations, ie where the histogram already covers a large region, with both bright and dark pixels present. I would like to share two SOF questions with you. Please check out the images in the questions, analyze their histograms, and check the resulting images after equalization :
How can I adjust contrast in OpenCV in C?
How do I equalize contrast & brightness of images using opencv?
So I would like to wind up this article here. In this article, we learned how to implement Histogram Equalization, how to use OpenCV for that etc. So take images, equalize it and have your own hack arounds.
See you next time !!!
Abid Rahman K.
## Tuesday, March 5, 2013
### Histograms - 1 : Find, Plot, Analyze !!!
Hi,
This time, we will go through various functions in OpenCV related to histograms.
So what is a histogram ? You can consider a histogram as a graph or plot which gives you an overall idea about the intensity distribution of an image. It is a plot with pixel values (ranging from 0 to 255) on the X-axis and the corresponding number of pixels in the image on the Y-axis.
It is just another way of understanding the image. By looking at the histogram of an image, you get intuition about the contrast, brightness, intensity distribution etc of that image. Almost all image processing tools today provide features on histograms. Below is an image from the "Cambridge in Colour" website, and I recommend you visit the site for more details.
Image Histogram
You can see the image and its histogram. (Remember, this histogram is drawn for grayscale image, not color image). Left region of histogram shows the amount of darker pixels in image and right region shows the amount of brighter pixels. From the histogram, you can see dark region is more than brighter region, and amount of midtones (pixel values in mid-range, say around 127) are very less.
(For more basic details on histograms, visit : http://www.cambridgeincolour.com/tutorials/histograms1.htm)
FIND HISTOGRAM
Now we have an idea on what is histogram, we can look into how to find this. OpenCV comes with an in-built function for this, cv2.calcHist(). Before using that function, we need to understand some terminologies related with histograms.
BINS :
The above histogram shows the number of pixels for every pixel value, ie from 0 to 255, so you need 256 values to show it. But consider: what if you need not find the number of pixels for every pixel value separately, but the number of pixels in an interval of pixel values? Say, for example, you need to find the number of pixels lying between 0 and 15, then 16 and 31, ..., 240 and 255. You will need only 16 values to represent the histogram. And that is what is shown in the example given in the OpenCV Tutorials on histograms.
So what you do is simply split the whole histogram into 16 sub-parts, and the value of each sub-part is the sum of all pixel counts in it. Each sub-part is called a "BIN". In the first case, the number of bins was 256 (one for each pixel value) while in the second case, it is only 16. BINS is represented by the term "histSize" in the OpenCV docs.
DIMS : It is the number of parameters for which we collect the data. In our case, we collect data regarding only one thing, intensity value. So here it is 1.
RANGE : It is the range of intensity values you want to measure. Normally, it is [0,256], ie all intensity values.
So now we use cv2.calcHist() function to find the histogram. Let's familiarize with the function and its parameters :
cv2.calcHist(images, channels, mask, histSize, ranges[, hist[, accumulate]])
1 - images : it is the source image of type uint8 or float32. It should be given in square brackets, ie, "[img]".
2 - channels : it is also given in square brackets. It is the index of the channel for which we calculate the histogram. For example, if the input is a grayscale image, its value is [0]. For a color image, you can pass [0], [1] or [2] to calculate the histogram of the blue, green or red channel respectively.
3 - mask : mask image. To find histogram of full image, it is given as "None". But if you want to find histogram of particular region of image, you have to create a mask image for that and give it as mask. (I will show an example later.)
4 - histSize : this represents our BIN count. Need to be given in square brackets. For full scale, we pass [256].
5 - ranges : this is our RANGE. Normally, it is [0,256].
So let's start with a sample image. Simply load an image in grayscale mode and find its full histogram.
img = cv2.imread('home.jpg',0)
hist = cv2.calcHist([img],[0],None,[256],[0,256])
hist is a 256x1 array, each value corresponds to number of pixels in that image with its corresponding pixel value. Now we should plot it, but how ?
PLOTTING HISTOGRAM
There are two ways, 1) Short Way : use Matplotlib & 2) Long Way : use OpenCV functions
1 - Using Matplotlib:
Matplotlib comes with a histogram plotting function : matplotlib.pyplot.hist()
It directly finds the histogram and plot it. You need not use calcHist() function to find the histogram. See the code below:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('home.jpg',0) # hypothetical filename, grayscale
plt.hist(img.ravel(),256,[0,256]); plt.show()
You will get a plot as below :
Image Histogram
NOTE : Actually to find histogram, Numpy also provides you a function, np.histogram(). So instead of calcHist() function, you can try below line :
hist,bins = np.histogram(img,256,[0,256])
hist is the same as we calculated before. But bins will have 257 elements, because Numpy calculates bins as 0-0.99, 1-1.99, 2-2.99 etc. So the final range would be 255-255.99. To represent that, they also add 256 at the end of bins. But we don't need that 256; up to 255 is sufficient.
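A quick made-up check of that bin-edge behaviour:

```python
import numpy as np

# tiny 2x3 "image" with a few repeated values
img = np.array([[0, 0, 255],
                [10, 10, 255]], dtype=np.uint8)

hist, bins = np.histogram(img, 256, [0, 256])
# hist[v] is the number of pixels with value v; bins holds the 257 bin edges
```

np.histogram flattens the array itself, so there is no need to ravel() first.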
Or you can use normal plot of matplotlib, which would be good for BGR plot. For that, you need to find the histogram data first. Try below code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('home.jpg') # hypothetical filename, color image
color = ('b','g','r')
for i,col in enumerate(color):
    histr = cv2.calcHist([img],[i],None,[256],[0,256])
    plt.plot(histr,color = col)
    plt.xlim([0,256])
plt.show()
You will get a image as below :
Histogram showing different channels
You can deduce from the above graph that blue has some high-value areas (obviously due to the sky).
2 - Using OpenCV functions :
Well, here you adjust the values of the histogram along with its bin values to look like x,y coordinates, so that you can draw it using the cv2.line() or cv2.polylines() function to generate the same image as above. This is already available in the OpenCV-Python2 official samples. You can check it here: https://github.com/Itseez/opencv/blob/master/samples/python2/hist.py . I had already mentioned it in one of my very early articles : Drawing Histogram in OpenCV-Python
Now we used calcHist to find the histogram of full image. What if you want to find some regions of an image? Just create a mask image with white color on the region you want to find histogram and black otherwise. I have demonstrated it while answering a SOF question. So I would like you to read that answer (http://stackoverflow.com/a/11163952/1134940). Just for a demo, I provide the same images here :
Due to resizing, histogram plot clarity is reduced. But I hope you can write your own code and analyze it.
SUMMARY
In short, we have seen what is image histogram, how to find and interpret histograms, how to plot histograms etc. It is sufficient for today. We will look into other histogram functions in coming articles.
Hope you enjoyed it !!! Feel free to share !!!
Abid Rahman K.
## Sunday, January 27, 2013
### K-Means Clustering - 3 : Working with OpenCV
Hi,
In the previous articles, K-Means Clustering - 1 : Basic Understanding and K-Means Clustering - 2 : Working with Scipy, we have seen what is K-Means and how to use it to cluster the data. In this article, We will see how we can use K-Means function in OpenCV for K-Means clustering.
OpenCV documentation for K-Means clustering : cv2.kmeans()
Function parameters :
Input parameters :
1 - samples : It should be of np.float32 data type, and as said in previous article, each feature should be put in a single column.
2 - nclusters(K) : Number of clusters
3 - criteria : It is the algorithm termination criteria. Actually, it should be a tuple of 3 parameters. They are ( type, max_iter, epsilon ):
3.a - type of termination criteria : It has 3 flags as below:
- cv2.TERM_CRITERIA_EPS - stop the algorithm iteration if specified accuracy, epsilon, is reached.
- cv2.TERM_CRITERIA_MAX_ITER - stop the algorithm after the specified number of iterations, max_iter.
- cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER - stop the iteration when any of the above condition is met.
3.b - max_iter - An integer specifying maximum number of iterations.
3.c - epsilon - Required accuracy
4 - attempts : The number of times the algorithm is executed using different initial labellings. The algorithm returns the labels that yield the best compactness, and that compactness is returned as an output.
5 - flags : This flag is used to specify how the initial centers are taken. Normally two flags are used for this : cv2.KMEANS_PP_CENTERS and cv2.KMEANS_RANDOM_CENTERS. (I didn't find any difference in their results, so I don't know where each is suitable. For the time being, I use the second one in my examples.)
Output parameters:
1 - compactness : It is the sum of squared distance from each point to their corresponding centers.
2 - labels : This is the label array (same as 'code' in previous article) where each element marked '0', '1'.....
3 - centers : This is array of centers of clusters.
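Before calling cv2.kmeans(), it may help to see the loop it runs internally. Below is a minimal Numpy sketch for 1-D data; the function name and the data are made up, and OpenCV's version adds smarter seeding and the termination criteria described above:

```python
import numpy as np

def kmeans_1d(z, k=2, iters=10, seed=0):
    rng = np.random.RandomState(seed)
    # pick k distinct data points as initial centers (KMEANS_RANDOM_CENTERS style)
    centers = z[rng.choice(len(z), k, replace=False)].astype(np.float64)
    for _ in range(iters):
        # assignment step: label each point with its nearest center
        labels = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
        # update step: move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = z[labels == j].mean()
    return labels, centers

# two obvious clumps around 11 and 91
data = np.array([10., 11., 12., 90., 91., 92.])
labels, centers = kmeans_1d(data)
```

The alternation of the two steps is all K-Means is; everything else in the cv2.kmeans() signature just controls how the loop starts and stops.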
Now let's do the same examples we did in last article. Remember, we used random number generator to generate data, so data may be different this time.
1 - Data with Only One Feature:
Below is the code, I have commented on important parts.
import numpy as np
import cv2
from matplotlib import pyplot as plt
x = np.random.randint(25,100,25)
y = np.random.randint(175,255,25)
z = np.hstack((x,y))
z = z.reshape((50,1))
# data should be np.float32 type
z = np.float32(z)
# Define criteria = ( type, max_iter = 10 , epsilon = 1.0 )
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
# Apply KMeans
ret,labels,centers = cv2.kmeans(z,2,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Now split the data depending on their labels
A = z[labels==0]
B = z[labels==1]
# Now plot 'A' in red, 'B' in blue, 'centers' in yellow
plt.hist(A,256,[0,256],color = 'r')
plt.hist(B,256,[0,256],color = 'b')
plt.hist(centers,32,[0,256],color = 'y')
plt.show()
Below is the output we get :
KMeans() with one feature set
2 - Data with more than one feature :
Directly moving to the code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
X = np.random.randint(25,50,(25,2))
Y = np.random.randint(60,85,(25,2))
Z = np.vstack((X,Y))
# convert to np.float32
Z = np.float32(Z)
# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret,label,center = cv2.kmeans(Z,2,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Now separate the data, Note the flatten()
A = Z[label.flatten()==0]
B = Z[label.flatten()==1]
# Plot the data
plt.scatter(A[:,0],A[:,1])
plt.scatter(B[:,0],B[:,1],c = 'r')
plt.scatter(center[:,0],center[:,1],s = 80,c = 'y', marker = 's')
plt.xlabel('Height'),plt.ylabel('Weight')
plt.show()
Note that, while separating the data into A and B, we used label.flatten(). It is because the 'label' returned by OpenCV is a column vector, while we actually need a plain array. In Scipy, we get 'label' as a plain array, so we don't need flatten() there. To understand more, check 'label' in both cases.
Below is the output we get :
KMeans() with two feature sets
3 - Color Quantization :
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('home.jpg') # hypothetical filename, any color image
Z = img.reshape((-1,3))
# convert to np.float32
Z = np.float32(Z)
# define criteria, number of clusters(K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 8
ret,label,center = cv2.kmeans(Z,K,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Now convert back into uint8, and make original image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))
cv2.imshow('res2',res2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Below is the output we get :
Color Quantization with KMeans Clustering
Summary :
So finally, we have seen how to use KMeans clustering with OpenCV. I know I haven't explained much in this article, because it is the same as the previous article. Only a single function has changed.
So this series on KMeans Clustering algorithm ends here.
Feel free to share it with your friends....
Regards,
Abid Rahman K.
## Friday, January 11, 2013
### Contours - 5 : Hierarchy
Hi,
In the last few articles on contours, you have worked with several contour-related functions provided by OpenCV. But when we found the contours in an image using the cv2.findContours() function, we passed two arguments in addition to the source image. They are the Contour Retrieval Mode and the Contour Approximation Method. We usually passed cv2.RETR_LIST or cv2.RETR_TREE for the first argument and cv2.CHAIN_APPROX_SIMPLE for the second, and they worked nicely. But what do they actually mean?
Also, in the output, we got two arrays: one is our contours, and the other is an output we named 'hierarchy' (please check the code in the previous articles). But we never used this hierarchy anywhere. So what is this hierarchy, and what is it for? What is its relationship with the function arguments mentioned above?
I don't know how important this topic is, mainly because I have never worried about hierarchy or these extra arguments in any of my projects, and there was no reason to. But I am sure there are people who benefit from these features, otherwise the OpenCV devs wouldn't have spent time introducing them. So whatever its use may be, let's just go through it. :)
So, what is this "hierarchy" ?
Normally we use the findContours() function to detect objects in an image, right? Sometimes objects are in different locations. But in some cases, some shapes are inside other shapes, just like nested figures. In this case, we call the outer one the parent and the inner one the child. This way, contours in an image have some relationship to each other. And we can specify how one contour is connected to another: is it a child of some other contour, or is it a parent, etc. The representation of this relationship is called the hierarchy.
Consider an example image below :
Hierarchy Representation
In this image, there are a few shapes which I have numbered from 0 to 5. 2 and 2a denote the external and internal contours of the outermost box.
Here, contours 0,1,2 are external or outermost. We can say, they are in hierarchy-0 or simply they are in same hierarchy level.
Next comes contour 2a. It can be considered a child of contour 2 (or, the other way around, contour 2 is the parent of contour 2a). So let it be in hierarchy-1. Similarly, contour 3 is a child of contour 2 and comes in the next hierarchy. Finally, contours 4 and 5 are the children of contour 3a, and they come in the last hierarchy level. From the way I numbered the boxes, I would say contour 4 is the first child of contour 3a.
I mentioned these things to understand terms like "same hierarchy level", "external contour", "child contour", "parent contour", "first child" etc. Now let's get into OpenCV.
Hierarchy Representation in OpenCV :
So each contour has its own information: which hierarchy level it is in, who its child is, who its parent is, etc. OpenCV represents this as an array of four values : [Next, Previous, First_Child, Parent]
"Next denotes next contour at the same hierarchical level."
For eg, take contour 0 in our picture. Who is next contour in its same level ? It is contour 1. So simply put it as 1. Similarly for Contour 1, next is contour 2. So Next = 2.
What about contour 2? There is no next contour in same level. So simply, put it as -1.
What about contour 4? It is in same level with contour 5. So its next contour is contour 5.
"Previous denotes previous contour at the same hierarchical level."
It is same as above. Previous contour of contour 1 is contour 0 in same level. Similarly for contour 2, it is contour 1. And for contour 0, there is no previous, so put it as -1.
"First_Child denotes its first child contour."
I think there is no need of any explanation. For contour 2, child is contour 2a. So it gets the corresponding index value of contour 2a.
What about contour 3a? It has two children. But we take only first child. And it is contour 4. So First_Child = 4 for contour 3a.
"Parent denotes index of its parent contour"
It is just opposite of First_Child. Both for contour 4 and 5, parent contour is contour 3a. For 3a, it is contour 3 and so on.
If there is no child or parent, that field is taken as -1.
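To make the four fields concrete, here is a small pure-Python sketch that walks a hand-made hierarchy array in the same [Next, Previous, First_Child, Parent] layout (the values are illustrative, not taken from a real image):

```python
import numpy as np

# [Next, Previous, First_Child, Parent] rows: two top-level contours
# (0 and 3), with contours 1 and 2 as children of contour 0
hierarchy = np.array([[ 3, -1,  1, -1],
                      [ 2, -1, -1,  0],
                      [-1,  1, -1,  0],
                      [-1,  0, -1, -1]])

def same_level(h, start):
    """Collect contour indices by following the Next links from `start`."""
    out, i = [], start
    while i != -1:
        out.append(i)
        i = h[i][0]          # Next
    return out

def children(h, parent):
    """First_Child of `parent`, then its siblings via Next."""
    c = h[parent][2]         # First_Child
    return same_level(h, c) if c != -1 else []

print(same_level(hierarchy, 0))  # [0, 3]
print(children(hierarchy, 0))    # [1, 2]
```

This is just the traversal idea; with a real image you would index into the hierarchy array returned by cv2.findContours() the same way.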
Now that we know the hierarchy style used in OpenCV, we can check the Contour Retrieval Modes in OpenCV with the help of the same image given above, i.e. what flags like cv2.RETR_LIST, cv2.RETR_TREE, cv2.RETR_CCOMP, cv2.RETR_EXTERNAL etc. mean.
Contour Retrieval Mode :
This is the second argument in the cv2.findContours() function. Let's understand each flag one by one.
cv2.RETR_LIST :
This is the simplest of the four flags (from an explanation point of view). It simply retrieves all the contours, but doesn't create any parent-child relationship. "Parents and kids are equal under this rule, and they are just contours", i.e. they all belong to the same hierarchy level.
So here, the 3rd and 4th terms in the hierarchy array are always -1. But obviously, the Next and Previous terms will have their corresponding values. Just check it yourself and verify it.
Below is the result I got; each row is the hierarchy detail of the corresponding contour. For example, the first row corresponds to contour 0. Its next contour is contour 1, so Next = 1. There is no previous contour, so Previous = -1. And the remaining two, as said before, are -1.
>>> hierarchy
array([[[ 1, -1, -1, -1],
[ 2, 0, -1, -1],
[ 3, 1, -1, -1],
[ 4, 2, -1, -1],
[ 5, 3, -1, -1],
[ 6, 4, -1, -1],
[ 7, 5, -1, -1],
[-1, 6, -1, -1]]])
This is a good choice to use in your code if you are not using any hierarchy features.
cv2.RETR_EXTERNAL
If you use this flag, it returns only the extreme outer contours. All child contours are left behind. "We can say, under this law, only the eldest in every family is taken care of. It doesn't care about the other members of the family :)".
So, in our image, how many extreme outer contours are there, i.e. at hierarchy-0 level? Only 3, i.e. contours 0, 1, 2, right? Now try to find the contours using this flag. Here also, the meaning of each element is the same as above. Compare it with the result above. Below is what I got:
>>> hierarchy
array([[[ 1, -1, -1, -1],
[ 2, 0, -1, -1],
[-1, 1, -1, -1]]])
You can use this flag if you want to extract only the outer contours. It might be useful in some cases.
cv2.RETR_CCOMP :
This flag retrieves all the contours and arranges them into a 2-level hierarchy: external contours of an object (i.e. its boundary) are placed in hierarchy-1, and the contours of holes inside the object (if any) are placed in hierarchy-2. If there is any object inside a hole, its contour is placed again in hierarchy-1 only, and its hole in hierarchy-2, and so on.
Just consider the image of a "big white zero" on a black background. The outer circle of the zero belongs to the first hierarchy, and the inner circle of the zero belongs to the second hierarchy.
We can explain it with a simple image. Here I have labelled the order of contours in red and the hierarchy they belong to in green (either 1 or 2). The order is the same as the order in which OpenCV detects contours.
So consider the first contour, i.e. contour-0. It is in hierarchy-1. It has two holes, contours 1 & 2, and they belong to hierarchy-2. So for contour-0, the next contour at the same hierarchy level is contour-3. There is no previous one. Its first child is contour-1 in hierarchy-2. It has no parent, because it is in hierarchy-1. So its hierarchy array is [3,-1,1,-1]
Now take contour-1. It is in hierarchy-2. The next one in the same hierarchy (under the parenthood of contour-0) is contour-2. No previous one. No child, but the parent is contour-0. So the array is [2,-1,-1,0].
Similarly contour-2 : It is in hierarchy-2. There is no next contour at the same level under contour-0. So no Next. Previous is contour-1. No child; parent is contour-0. So the array is [-1,1,-1,0].
Contour - 3 : Next in hierarchy-1 is contour-5. Previous is contour-0. Child is contour-4 and no parent. So array is [5,0,4,-1].
Contour - 4 : It is in hierarchy 2 under contour-3 and it has no sibling. So no next, no previous, no child, parent is contour-3. So array is [-1,-1,-1,3].
Remaining you can fill up. This is the final answer I got:
>>> hierarchy
array([[[ 3, -1, 1, -1],
[ 2, -1, -1, 0],
[-1, 1, -1, 0],
[ 5, 0, 4, -1],
[-1, -1, -1, 3],
[ 7, 3, 6, -1],
[-1, -1, -1, 5],
[ 8, 5, -1, -1],
[-1, 7, -1, -1]]])
So where can we apply this? I don't have a good application right now. One application would be in OCR. Those who have checked my article "Simple Digit Recognition OCR in OpenCV-Python" would have noticed that I used area as a constraint to remove the contours of holes inside numbers like 8, 9, 0, 6 etc. I found that area by checking a lot of values. Instead, I should have used this feature to filter out the holes inside the numbers. (To be honest, I had no idea regarding the hierarchy when I wrote that code.)
UPDATE : You can find a simple demo of practical application of cv2.RETR_CCOMP in this SOF link : http://stackoverflow.com/a/14279746/1134940
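As a sketch of that hole-filtering idea, using the cv2.RETR_CCOMP hierarchy array shown above: under this mode a non-negative Parent field marks a hole (hierarchy-2), so no hand-tuned area threshold is needed.

```python
import numpy as np

# hierarchy returned for the cv2.RETR_CCOMP example above
hierarchy = np.array([[[ 3, -1,  1, -1],
                       [ 2, -1, -1,  0],
                       [-1,  1, -1,  0],
                       [ 5,  0,  4, -1],
                       [-1, -1, -1,  3],
                       [ 7,  3,  6, -1],
                       [-1, -1, -1,  5],
                       [ 8,  5, -1, -1],
                       [-1,  7, -1, -1]]])

# under RETR_CCOMP, Parent != -1 means the contour is a hole (hierarchy-2)
holes = [i for i, row in enumerate(hierarchy[0]) if row[3] != -1]
outer = [i for i, row in enumerate(hierarchy[0]) if row[3] == -1]
print(holes)  # [1, 2, 4, 6]
print(outer)  # [0, 3, 5, 7, 8]
```

In the OCR case you would keep only the contours listed in `outer` and drop the ones in `holes`.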
cv2.RETR_TREE :
And this is the final guy, Mr.Perfect. It retrieves all the contours and creates a full family hierarchy list. "It even tells, who is the grandpa, father, son, grandson and even beyond... ".
For example, I take the above image, rewrite the code for cv2.RETR_TREE, reorder the contours as per the result given by OpenCV and analyze it. Again, red letters give the contour number and green letters give the hierarchy order.
Take contour-0 : It is in hierarchy-0. Next contour in same hierarchy is contour-7. No previous contours. Child is contour-1. And no parent. So array is [7,-1,1,-1].
Take contour-1 : It is in hierarchy-1. No contour at the same level. No previous one. Child is contour-2. Parent is contour-0. So the array is [-1,-1,2,0].
And remaining, try yourself. Below is the full answer:
>>> hierarchy
array([[[ 7, -1, 1, -1],
[-1, -1, 2, 0],
[-1, -1, 3, 1],
[-1, -1, 4, 2],
[-1, -1, 5, 3],
[ 6, -1, -1, 4],
[-1, 5, -1, 4],
[ 8, 0, -1, -1],
[-1, 7, -1, -1]]])
I am not sure where you can use it.
So that is what the Contour Retrieval Mode is.
Next we will deal with the third argument in cv2.findContours(), i.e. the Contour Approximation Method.
Contour Approximation Method
There are 3 flags under this category, but I am discussing only the first two - cv2.CHAIN_APPROX_NONE and cv2.CHAIN_APPROX_SIMPLE.
The first one finds all the points on the contour or boundary. But do we actually need all the points? For example, suppose you found the contour of a straight line. Do you need all the points on the line to represent it? No, we need just the two end points of the line. This is what the second flag does: it removes all redundant points and compresses the contour.
This can be easily visualized as follows. Take an image with an upright rectangle in it. Find the contours using both flags (take the second argument as cv2.RETR_LIST). First compare the number of points in each contour. Now plot each point of both contours on the rectangle and compare the results. See below:
contours using cv2.CHAIN_APPROX_SIMPLE
contours using cv2.CHAIN_APPROX_NONE
In the first case, you can see a blue boundary. That is because all the plotted points are touching each other, even though they are actually distinct points, and there are 734 of them in the array. The second method gives only four points, at the four corners. That is a real difference: the second method is a good improvement in both memory consumption and performance.
**********************************************************************************
So I think you might have got a simple intuitive understanding regarding concept of hierarchy in OpenCV. As I mentioned in the beginning of this article, I don't know how important is this topic, since I have never used this. If I find any application using this hierarchy, I will provide the links here. | 2013-05-24 14:16:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46759554743766785, "perplexity": 1762.1007149563145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704664826/warc/CC-MAIN-20130516114424-00091-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://matstud.org.ua/texts/2014/42_1/38-43.html | # Periodicity of Dirichlet series
Author
Karazin Kharkiv National University
Abstract
We prove that whenever all differences between zeros of two quasipolynomials form a discrete set, then both quasipolynomials are periodic with the same period. The result is valid for some classes of Dirichlet series and almost periodic holomorphic functions as well.
Keywords
quasipolynomial; periodic function; zero set; discrete set; Dirichlet series; almost periodic holomorphic function
Reference
1. S.Ju. Favorov, N.P. Girya, On a criterion of periodicity of polynomials, Ufa's Mathematical Journal, 4 (2012), №1, 47–52.
2. G. Kozma, F. Oravecz, On the gaps between zeros of trigonometric polynomials, Real Anal. Exchange, 28 (2002/03), №2, 447–454.
3. M.G. Krein, B.Ja. Levin, On entire almost periodic functions of an exponential type, DAN SSSR, 64 (1949), №2, 285–287. (in Russian)
4. B.Ja. Levin, Distributions of zeros of entire functions, V.5, Transl. of Math. Monograph, AMS, Providence, RI, 1980.
5. V.P. Potapov, On divisors of quasipolynomials, Sbornik trudov Instituta matematiki AN SSSR, ser. mat, 6 (1942), 115–134. (in Russian)
6. B.M. Levitan, Almost periodic functions, Gostehizdat, Moscow, 1953. (in Russian)
Pages
38-43
Volume
42
Issue
1
Year
2014
Journal
Matematychni Studii
Full text of paper
Table of content of issue | 2022-10-03 12:26:45 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9214719533920288, "perplexity": 11979.871972569665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00086.warc.gz"} |
https://sanyamkapoor.com/kb/gradient-boosted-decision-trees-a-recap | # Gradient Boosted Decision Trees: A Recap
Sep 16, 2021
ML & Stats in 🔢 math
This technical note is a summary of the big three gradient boosting decision tree (GBDT) algorithms. Some notation has been slightly tweaked from the original to maintain consistency.
Gradient boosting is the process of building an ensemble of predictors by performing gradient descent in the functional space.
For an ensemble of $K$ predictors $\phi_{K}(\mathbf{x}) = \sum_{k=1}^Kf_k(\mathbf{x})$ with weak predictors $f$ as decision trees, the typical learning objective is,
$\mathcal{L}(\phi_K) = \sum_{i=1}^n\ell(y_i, \phi_K(\mathbf{x}_i)) + \sum_{k=1}^K\Omega(f_k),$
for a differentiable loss function $\ell$, and regularization term,
$\Omega(f) = \gamma T + \tfrac{1}{2}\lambda \lVert w\rVert^2,$
where $T$ is the number of leaves in each tree $f$, and $w \in \mathbb{R}^T$ is the vector of continuous scores for each leaf. Note that classical GBDT does not include the regularization term.
This optimization problem cannot be solved by traditional optimization methods, so we resort to boosting: selecting one best function in each round. Hence, we solve the greedy objective,
$\mathcal{L}(f_k) = \sum_{i=1}^n\ell(y_i, \phi_{k-1}(\mathbf{x}_i) + f_k(\mathbf{x}_i)) + \Omega(f_k),$
Using a second-order Taylor expansion of $\ell$ around $\phi_{k-1}$ leads to a simplified objective. The optimal objective value for a given tree structure $q$ is then found to be,
$\mathcal{L}^\star(q) = -\frac{1}{2}\sum_{t=1}^T\frac{\left(\sum_{i \in I_t} g_i\right)^2}{\sum_{i\in I_t} h_i + \lambda} + \gamma T,$
where $g_i$ and $h_i$ are the first and second order gradients from the Taylor expansion, respectively. $I_t$ is the instance set at leaf $t$.
Since it is practically impossible to evaluate all the kinds of possible tree structures, we add another greedy construction where we start with a single leaf node, and keep splitting. The split candidates can then be evaluated, for instance in terms of "loss reduction".
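The "loss reduction" evaluation of split candidates can be sketched as follows: an illustrative single-feature implementation of the exact greedy search, using the gain implied by the optimal-objective formula above (my own sketch, not any library's actual code):

```python
import numpy as np

def best_split(x, g, h, lam=1.0, gamma=0.0):
    """Exact greedy split search on one feature.
    x: feature values; g, h: per-instance first/second-order gradients."""
    order = np.argsort(x)
    xs, gs, hs = x[order], g[order], h[order]
    G, H = gs.sum(), hs.sum()
    GL = HL = 0.0
    best_gain, best_thr = -np.inf, None
    for i in range(len(xs) - 1):
        GL += gs[i]
        HL += hs[i]
        GR, HR = G - GL, H - HL
        # gain = score(left) + score(right) - score(no split), minus split penalty
        gain = 0.5 * (GL**2 / (HL + lam) + GR**2 / (HR + lam)
                      - G**2 / (H + lam)) - gamma
        if gain > best_gain:
            best_gain, best_thr = gain, (xs[i] + xs[i + 1]) / 2
    return best_thr, best_gain

# squared loss with all predictions at 0: g = -y, h = 1
x = np.array([1., 2., 3., 4., 5., 6.])
y = np.array([0., 0., 0., 10., 10., 10.])
thr, gain = best_split(x, -y, np.ones_like(y))
print(thr)  # 3.5
```

The two obvious groups in `y` are separated exactly at the midpoint between the third and fourth sorted feature values, which is where the gain peaks.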
### Decision Trees
Decision trees are built by recursive partitioning of the feature space into disjoint regions for prediction. The main cost in building a decision tree comes from the split-finding algorithm.
The simplest approach to split-finding is a pre-sorted algorithm, which enumerates all possible split points over the pre-sorted feature values. This is the exact greedy algorithm, and it finds the optimal split points. It is, however, inefficient in both training speed and memory consumption.
The alternative, approximate but much faster approach, is to instead build quantiles of the feature distribution, where the continuous features are split into buckets. The quantiles can be built globally once, or locally at each level in the tree. Local splits are often more appropriate for deeper trees.
High-cardinality categorical variables can be handled by applying one-hot encoding to a smaller number of clustered values, although it has generally been noted that converting high-cardinality categorical variables to numerical features is the most efficient method, with minimal information loss.
## The Big Three
### XGBoost
TL;DR: (i) With machine learning, XGBoost aims to be smarter and faster about split-finding. (ii) With software engineering, XGBoost relies on column blocks for parallelization, cache-aware access patterns to avoid interleaving read/write access, and block compression for out-of-core computation (similar to columnar storage).
Weighted Quantile Sketch: Ideally, we would like to select the $l$ candidate split points for feature in dimension $d$ as $\{s_{d1},s_{d2}\dots,s_{dl}\}$, in a manner that they are distributed evenly over the data ($s_{d1}$ is always the minimum feature value and $s_{dl}$ is always the maximum feature value). The weights are represented by the second-order gradient values. The constraint is to maintain differences between successive rank functions below some threshold value $\epsilon$, such that there are roughly $1/\epsilon$ candidate points. This is available as the sketch_eps parameter in XGBoost when tree_method=approx.
A version of weighted quantile sketch for non-uniformly weighted data is also proposed with theoretical guarantees.
Sparsity-aware Split Finding: To handle missing feature values, XGBoost aims to learn an optimal default direction from the data. The information gain for each candidate direction is computed using the same loss-reduction formula above. This also works for the quantile-based buckets, where the statistics are computed using only the non-missing values. This provides a unified way of handling all sparsity patterns.
Misc. Statements of Note:
• XGBoost notes (as per user feedback) that column subsampling is often more effective at preventing over-fitting than the traditional row subsampling.
• From experiments, XGBoost scales linearly (slightly super-linear) with the increase in number of cores.
### LightGBM
TL;DR: The efficiency and scalability of XGBoost remain unsatisfactory on problems with high $n$ and high $d$. There are two ways to speed things up: (i) reduce the data size, or (ii) reduce the feature size. But straightforward subsampling is highly non-trivial. LightGBM addresses (i) via Gradient-based One-Side Sampling (GOSS), and (ii) via Exclusive Feature Bundling (EFB).
Gradient-based One-Side Sampling: Part of the inspiration here is the classical boosting algorithm AdaBoost, where every instance carries a weight (starting with uniform weighting).
The contention is that, when gradients are used as a measure of an instance's weight, uniform subsampling can lead to inaccurate gain estimation because instances with large gradient magnitudes dominate. Instead, GOSS keeps the instances whose gradient magnitudes fall in a chosen top percentile $a \times 100\%$, and uniformly samples a fraction $b$ only from the remainder of the data, amplifying the sampled gradient values by $\frac{1-a}{b}$ so as not to change the original data distribution by much.
LightGBM then uses a slightly modified version of information gain for split finding, which relies only on reweighted first-order gradients on a subsampled version of the instance set. Theoretical results show that the estimation error of the information gain decays at rate $\mathcal{O}(n^{-1/2})$.
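A minimal sketch of the GOSS sampling step described above (illustrative only; the function name and defaults are my own, not LightGBM's API):

```python
import numpy as np

def goss_sample(grad, a=0.2, b=0.1, seed=0):
    """Keep the top a-fraction of instances by |gradient|, uniformly
    sample a b-fraction of the rest, and up-weight the sampled part
    by (1 - a) / b to preserve the data distribution."""
    rng = np.random.default_rng(seed)
    n = len(grad)
    top_n, rand_n = int(a * n), int(b * n)
    order = np.argsort(-np.abs(grad))
    top, rest = order[:top_n], order[top_n:]
    sampled = rng.choice(rest, size=rand_n, replace=False)
    idx = np.concatenate([top, sampled])
    weights = np.ones(len(idx))
    weights[top_n:] = (1 - a) / b   # amplify the small-gradient part
    return idx, weights

grad = np.linspace(-1, 1, 100)
idx, w = goss_sample(grad)
print(len(idx))  # 30 = 20 kept + 10 sampled
```

The returned weights would multiply the gradients when computing the modified information gain.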
Exclusive Feature Bundling: High-dimensional data is usually very sparse, which gives us an opportunity to design a nearly lossless approach to reduce the number of features by bundling ones that are mutually exclusive (e.g. the dimensions of a one-hot encoding are mutually exclusive: if one is non-zero, the others must be zero). The objective is to arrive at roughly the same feature histograms from the bundles as we would from the individual features.
First, to find the exclusive bundles, note that this is an NP-hard problem (by equivalence to graph coloring, where vertices of the same color represent mutually exclusive features). Therefore we can only use approximate greedy algorithms. To relax the strict constraints of exact graph coloring, we can randomly pollute features, allowing a degree of conflict. The weighted graph is constructed such that edge weights correspond to the total conflicts between features.
Second, to construct the bundle, we simply merge them in a manner such that the constructed histogram bins assign different features to different bins.
Misc. Statements of Note:
• LightGBM achieves 2-20x speedup across various classification and ranking problems with very high number of features.
### CatBoost
TL;DR: The authors argue that existing implementations suffer from a shift in the predictive distribution caused by target leakage. To solve that, CatBoost proposes Ordered boosting, which efficiently implements target statistic calculations for categorical features via random permutations.
Greedy Target Statistic and Target Leakage: One way to convert a categorical variable into a numerical value is to compute some target statistic, which estimates the expected target conditioned on the category value. The most straightforward way is to compute the empirical conditional average, adjusted by a prior $p$ (e.g. the empirical average of the target value over the full dataset). For feature dimension $d$ of input instance $i$,
$\widehat{\mathbf{x}}_{id} = \frac{\sum_{j=1}^n \mathbb{1}_{\mathbf{x}_{id} = \mathbf{x}_{jd}}\cdot y_j + ap}{\sum_{j=1}^n \mathbb{1}_{\mathbf{x}_{id} = \mathbf{x}_{jd}} + a}$
The problem here is target leakage: the statistic for $\mathbf{x}_i$ is computed using its own target $y_i$. Leave-one-out does not work either. What we want is,
$\mathbb{E}\left[\widehat{\mathbf{x}}_{d} \mid y \right] = \mathbb{E}\left[\widehat{\mathbf{x}}_{id} \mid y_i \right]$
One way to achieve this is to compute the target statistic on a held-out set. But this is wasteful, since the held-out training data remains otherwise unused.
Ordered Target Statistic: A more effective strategy, inspired by online learning algorithms, is to rely on the observed history: in this case, a permutation of the training data. Therefore, for a permutation $\sigma$ of the dataset, the target statistic $\widehat{\mathbf{x}}_i$ is computed using only the data $\mathcal{D}_i = \{\mathbf{x}_j : \sigma(j) < \sigma(i) \}$. To avoid high-variance estimates for the earliest instances in the permutation, each boosting round uses a different permutation.
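A minimal sketch of the ordered target statistic (my own illustrative helper, not CatBoost's API):

```python
import numpy as np

def ordered_ts(cat, y, perm, a=1.0, p=0.5):
    """Encode instance i's category using only the targets of instances
    that precede i in `perm`, smoothed by prior p with strength a."""
    sums, cnts = {}, {}
    ts = np.empty(len(y))
    for i in perm:
        c = cat[i]
        ts[i] = (sums.get(c, 0.0) + a * p) / (cnts.get(c, 0) + a)
        sums[c] = sums.get(c, 0.0) + y[i]   # history updated AFTER encoding i
        cnts[c] = cnts.get(c, 0) + 1
    return ts

cat = np.array([0, 0, 0])
y = np.array([1.0, 0.0, 1.0])
ts = ordered_ts(cat, y, perm=[0, 1, 2])
print(ts)  # entries: 0.5, 0.75, 0.5 -- each sees only its predecessors
```

Because the history is updated after an instance is encoded, no instance's statistic ever uses its own target, which is exactly the leakage-free property required above.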
Prediction Shift: As a consequence of the target leakage above, all subsequent distributions are biased, i.e. the predictive distribution $\phi_{K}(\mathbf{x}_i) \mid \mathbf{x}_i$ at a training point does not match that of a test point $\phi_{K}(\mathbf{x}_\star) \mid \mathbf{x}_\star$. The same is true for the gradient distribution $g(\mathbf{x}_i) \mid \mathbf{x}_i$ against the corresponding test-instance distribution.
Practical Ordered Boosting: In principle, the boosting procedure should compute each residual using a model that was never trained on that data point (paired with the Ordered TS). Done naively this is impractical, increasing the computational complexity by a factor of $n$. One practical trick is to approximate the gradient in terms of cosine similarity; another is to store only exponentially spaced checkpoints along the permutations.
Misc. Statements of Note:
• Ordered boosting can often be slower in terms of wallclock time. | 2022-01-23 02:08:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 44, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6386414766311646, "perplexity": 945.2827975760592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00297.warc.gz"} |
https://www.molympiad.ml/2019/05/china-girls-math-olympiad-2005.html | ## China Girls Math Olympiad 2005
1. As shown in the following figure, point $P$ lies on the circumcicle of triangle $ABC.$ Lines $AB$ and $CP$ meet at $E,$ and lines $AC$ and $BP$ meet at $F.$ The perpendicular bisector of line segment $AB$ meets line segment $AC$ at $K,$ and the perpendicular bisector of line segment $AC$ meets line segment $AB$ at $J.$ Prove that $\left(\frac{CE}{BF} \right)^2 = \frac{AJ \cdot JE}{AK \cdot KF}.$
2. Find all ordered triples $(x, y, z)$ of real numbers such that $xy + yz + zx = 1$ and $5 \left(x + \frac{1}{x} \right) = 12 \left(y + \frac{1}{y} \right) = 13 \left(z + \frac{1}{z} \right).$
3. Determine if there exists a convex polyhedron such that
a) it has 12 edges, 6 faces and 8 vertices;
b) it has 4 faces with each pair of them sharing a common edge of the polyhedron.
4. Determine all positive real numbers $a$ such that there exists a positive integer $n$ and sets $A_1, A_2, \ldots, A_n$ satisfying the following conditions
• every set $A_i$ has infinitely many elements;
• every pair of distinct sets $A_i$ and $A_j$ do not share any common element
• the union of sets $A_1, A_2, \ldots, A_n$ is the set of all integers;
• for every set $A_i,$ the positive difference of any pair of elements in $A_i$ is at least $a^i.$
5. Let $x$ and $y$ be positive real numbers with $x^3 + y^3 = x - y.$ Prove that $x^2 + 4y^2 < 1.$
6. An integer $n$ is called good if there are $n \geq 3$ lattice points $P_1, P_2, \ldots, P_n$ in the coordinate plane satisfying the following conditions: If line segment $P_iP_j$ has a rational length, then there is $P_k$ such that both line segments $P_iP_k$ and $P_jP_k$ have irrational lengths; and if line segment $P_iP_j$ has an irrational length, then there is $P_k$ such that both line segments $P_iP_k$ and $P_jP_k$ have rational lengths.
a) Determine the minimum good number.
b) Determine if 2005 is a good number.
(A point in the coordinate plane is a lattice point if both of its coordinates are integers.)
7. Let $m$ and $n$ be positive integers with $m > n \geq 2.$ Set $S = \{1, 2, \ldots, m\},$ and $T = \{a_l, a_2, \ldots, a_n\}$ is a subset of S such that every number in $S$ is not divisible by any two distinct numbers in $T.$ Prove that $\sum^n_{i = 1} \frac {1}{a_i} < \frac {m + n}{m}.$
8. Given an $a \times b$ rectangle with $a > b > 0$, determine the minimum side of a square that covers the rectangle. (A square covers the rectangle if each point in the rectangle lies inside the square.) | 2019-06-19 10:58:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5844723582267761, "perplexity": 87.31863276437038}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998959.46/warc/CC-MAIN-20190619103826-20190619125826-00281.warc.gz"} |
https://www.yesterdayscoffee.de/2016/06/03/adjusting-spacing-in-lilypond/ | # Adjusting spacing in lilypond
Because I always forget how to do it, this is my default (for explanations see the Lilypond manual):
```\paper {
top-markup-spacing #'basic-distance = #5 % title to page top
markup-system-spacing #'basic-distance = #15 % first system to title
system-system-spacing #'basic-distance = #20 % between systems
}
```
This entry was posted in LilyPond and tagged , by swk. Bookmark the permalink. | 2019-06-26 11:56:31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9840283989906311, "perplexity": 9235.152564250317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00359.warc.gz"} |
https://www.12000.org/my_notes/PDE_animations/insu7.htm | #### 1.1.7 Pure diffusion. Left end nonhomogeneous and time dependent Dirichlet, right end at zero temperature
Solve the heat PDE $$u_{t}=u_{xx}$$ with boundary conditions $$u\left ( 0,t\right ) =t,u\left ( \pi ,t\right ) =0$$ and initial conditions $$u\left ( x,0\right ) =0$$
solution
Since the boundary conditions are nonhomogeneous, the PDE is converted to one with homogeneous BC using a reference function. The reference function needs only to satisfy the nonhomogeneous B.C.
Let $r\left ( x,t\right ) =t\left ( 1-\frac{x}{\pi }\right )$ Hence $u\left ( x,t\right ) =w\left ( x,t\right ) +r\left ( x,t\right )$ Substituting this back into $$u_{t}=u_{xx}$$ gives$w_{t}+r_{t}=w_{xx}+r_{xx}$ but $$r_{t}=1-\frac{x}{\pi }$$ and $$r_{xx}=0$$, therefore the above becomes\begin{align} w_{t} & =w_{xx}+\frac{x}{\pi }-1\nonumber \\ w_{t} & =w_{xx}+Q\left ( x\right ) \tag{1} \end{align}
Where $$Q\left ( x\right ) =\frac{x}{\pi }-1\,.$$ This PDE now has homogeneous B.C.\begin{align*} w\left ( 0,t\right ) & =0\\ w\left ( \pi ,t\right ) & =0 \end{align*}
(1) is solved using eigenfunction expansion. Let $$w\left ( x,t\right ) =\sum a_{n}\left ( t\right ) \Phi _{n}\left ( x\right )$$. Where $$\Phi _{n}\left ( x\right ) =\sin \left ( \sqrt{\lambda _{n}}x\right ) =\sin nx$$ and $$\lambda _{n}=\left ( \frac{n\pi }{\pi }\right ) ^{2}=n^{2}$$ where $$n=1,2,3,\cdots$$. Therefore$$w\left ( x,t\right ) =\sum a_{n}\left ( t\right ) \sin \left ( nx\right ) \tag{1A}$$ Substituting this back into (1) gives$\sum a_{n}^{\prime }\left ( t\right ) \Phi _{n}\left ( x\right ) =\sum a_{n}\left ( t\right ) \Phi _{n}^{\prime \prime }\left ( x\right ) +\sum q_{n}\Phi _{n}\left ( x\right )$ Where $$Q\left ( x\right ) =\sum q_{n}\Phi _{n}\left ( x\right )$$ is the eigenfunction expansion of the source term. The above reduces, after replacing $$\Phi _{n}^{\prime \prime }\left ( x\right )$$ by $$-\lambda _{n}\Phi _{n}\left ( x\right )$$ to the following\begin{align} a_{n}^{\prime }\left ( t\right ) & =-a_{n}\left ( t\right ) \lambda _{n}+q_{n}\nonumber \\ a_{n}^{\prime }\left ( t\right ) +a_{n}\left ( t\right ) \lambda _{n} & =q_{n} \tag{2} \end{align}
Now $$q_{n}$$ is found as follows. Since \begin{align*} Q\left ( x\right ) & =\sum _{n=1}^{\infty }q_{n}\Phi _{n}\left ( x\right ) \\ \int _{0}^{\pi }Q\left ( x\right ) \Phi _{n}\left ( x\right ) dx & =\frac{\pi }{2}q_{n}\\ q_{n} & =\frac{2}{\pi }\int _{0}^{\pi }\left ( \frac{x}{\pi }-1\right ) \sin \left ( nx\right ) dx\\ & =\frac{2}{\pi }\left ( \frac{-n\pi +\sin \left ( n\pi \right ) }{n^{2}\pi }\right ) \\ & =\frac{2}{\pi }\left ( \frac{-n\pi }{n^{2}\pi }\right ) \\ & =\frac{-2}{n\pi } \end{align*}
Hence (2) becomes$a_{n}^{\prime }\left ( t\right ) +a_{n}\left ( t\right ) n^{2}=\frac{-2}{n\pi }$ The general solution is$$a_{n}\left ( t\right ) =-\frac{2}{n^{3}\pi }+c_{n}e^{-n^{2}t} \tag{3}$$ for some constant $$c_{n}$$. Therefore (1A) becomes$$w\left ( x,t\right ) =\sum _{n=1}^{\infty }\left ( -\frac{2}{n^{3}\pi }+c_{n}e^{-n^{2}t}\right ) \sin \left ( nx\right ) \tag{4}$$ At time $$t=0$$ the above becomes$$w\left ( x,0\right ) =\sum _{n=1}^{\infty }\left ( -\frac{2}{n^{3}\pi }+c_{n}\right ) \sin \left ( nx\right ) \tag{5}$$ But \begin{align*} w\left ( x,0\right ) & =u\left ( x,0\right ) -r\left ( x,0\right ) \\ & =0-0\\ & =0 \end{align*}
Therefore (5) becomes$0=\sum _{n=1}^{\infty }\left ( -\frac{2}{n^{3}\pi }+c_{n}\right ) \sin \left ( nx\right )$ Which implies$c_{n}=\frac{2}{n^{3}\pi }$ Hence from (4)$$w\left ( x,t\right ) =\sum _{n=1}^{\infty }\frac{2}{n^{3}\pi }\left ( e^{-n^{2}t}-1\right ) \sin \left ( nx\right ) \tag{6}$$ Hence the complete solution is\begin{align*} u\left ( x,t\right ) & =w\left ( x,t\right ) +r\left ( x,t\right ) \\ & =t\left ( 1-\frac{x}{\pi }\right ) +\sum _{n=1}^{\infty }\frac{2}{n^{3}\pi }\left ( e^{-n^{2}t}-1\right ) \sin \left ( nx\right ) \end{align*}
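As a numerical sanity check of the complete solution above (an illustrative Python sketch added here, not part of the original notes or the Mathematica source), partial sums of the series satisfy the boundary and initial conditions:

```python
import math

def u(x, t, terms=200):
    """Partial sum of u(x,t) = t(1 - x/pi) + sum_n 2/(n^3 pi)(e^{-n^2 t}-1) sin(nx)."""
    s = t * (1.0 - x / math.pi)
    for n in range(1, terms + 1):
        s += 2.0 / (n**3 * math.pi) * (math.exp(-n * n * t) - 1.0) * math.sin(n * x)
    return s

# Check u(x,0) = 0, u(0,t) = t, u(pi,t) = 0 numerically
print(u(1.0, 0.0), u(0.0, 0.7), u(math.pi, 0.7))
```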
Example
Here is an animation using Mathematica, for $$L=1,k=1$$, running for 3 seconds.
Source code is | 2019-02-22 18:30:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.9999943971633911, "perplexity": 6674.446588759604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247522457.72/warc/CC-MAIN-20190222180107-20190222202107-00129.warc.gz"} |
http://mathhelpforum.com/discrete-math/85617-sigma-notation-help.html | # Math Help - Sigma notation help
1. ## Sigma notation help
How do I write:
7 + 8 + 9 + ... + 137 using sigma notation
2. This can be done several ways: $\sum\limits_{k = 1}^{131} {\left( {6 + k} \right)} = \sum\limits_{k = 6}^{136} {\left( {1 + k} \right)}$.
3. Hello, Cammie!
How do I write: . $7 + 8 + 9 + \hdots + 137$ using sigma notation?
How about: . $\sum^{137}_{n=7} n$ | 2015-05-28 17:32:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9899281859397888, "perplexity": 4759.857211967122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929422.8/warc/CC-MAIN-20150521113209-00127-ip-10-180-206-219.ec2.internal.warc.gz"} |
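All three expressions describe the same sum; a one-line Python check (added here for illustration, not part of the original thread) confirms they agree:

```python
total = sum(range(7, 138))                            # 7 + 8 + ... + 137
assert total == sum(6 + k for k in range(1, 132))     # k = 1 .. 131
assert total == sum(1 + k for k in range(6, 137))     # k = 6 .. 136
print(total)  # 9432
```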
https://pgadey.wordpress.com/category/math/ | ## Math Club Number Theory Training Session
Posted in Lecture Notes, Math by pgadey on 2019/01/31
These are some questions that I prepared for Math Club. The problems follow Paul Zeitz’s excellent book The Art and Craft of Problem Solving. You can find this hand-out here:
• Try out lots of examples.
• The small numbers are your friends.
2. Facts and Questions
Fact 1 If ${a, b \in \mathbb{Z}}$ we write ${a | b}$ for the statement “${a}$ divides ${b}$.”
Formally, ${a|b}$ means ${b = ka}$ for ${k \in \mathbb{Z}}$.
Question 2 What is the largest ${n}$ such that ${n^3 + 100}$ is divisible by ${n+10}$? Idea: Find a factorization ${n^3+100 = (n+10)( ... ) \pm C}$ where ${C}$ is a small constant.
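Following the suggested factorization, $n^3+100=(n+10)(n^2-10n+100)-900$, so $n+10$ must divide $900$ and the largest such $n$ is $890$. A brute-force Python check (illustrative, not from the handout):

```python
# n + 10 divides n^3 + 100 exactly when n + 10 divides 900
hits = [n for n in range(1, 2000) if (n**3 + 100) % (n + 10) == 0]
assert 890**3 + 100 == (890 + 10) * (890**2 - 10 * 890 + 100) - 900
print(max(hits))  # 890
```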
Fact 3 The “divisors” of ${k}$ are all ${d}$ such that ${d | k}$. We say ${p}$ is “prime” if its divisors are ${\{1, p\}}$. We say that ${k}$ is “composite” if it is not prime.
Fact 4 (Fundamental Theorem of Arithmetic) Any natural number ${n}$ is a product of a unique list of primes.
Question 5 Show that ${\sqrt{2}}$ is irrational. Generalize!
Question 6 Show that there are infinitely many primes. Euclid’s idea: Suppose there are finitely many ${\{ p_1, p_2, \dots, p_n\}}$ and consider ${N = p_1 p_2 \dots p_k + 1}$.
Question 7 Show that there are arbitrarily large gaps between primes. That is, show that for any ${k}$ there are ${k}$ consecutive numbers ${n, n+1, \dots, n+k}$ which are all composite.
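The classical construction for Question 7 takes $(k+1)!+2,\dots,(k+1)!+(k+1)$: the number $(k+1)!+j$ is divisible by $j$, hence composite. A small Python sketch (illustrative only):

```python
from math import factorial

def composite_run(k):
    """k consecutive composite numbers: (k+1)! + 2, ..., (k+1)! + (k+1)."""
    base = factorial(k + 1)
    return [base + j for j in range(2, k + 2)]

run = composite_run(5)
print(run)  # [722, 723, 724, 725, 726]
```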
Question 8 (Germany 1995) Consider the sequence ${x_0 = 1}$ and ${x_{n+1} = ax_n + b}$. Show that this sequence contains infinitely many composite numbers.
3. Congruence
Fact 9 (The Division Algorithm) For any ${a, b \in \mathbb{N}}$ there is a unique pair ${(k,r)}$ such that ${b = ka + r}$ and ${0 \leq r < a}$.
Fact 10 We write ${a \equiv b \mod n}$ if ${n | (a-b)}$. For any ${a \in \mathbb{Z}}$ there is ${r \in \{0, 1, \dots, n-1\}}$ such that ${a \equiv r \mod n}$. We say that “${a}$ is congruent to ${r}$ modulo ${n}$”. Congruence preserves the usual rules of arithmetic regarding addition and multiplication.
Question 11 Suppose that ${n}$ has digits ${n = [d_1 \dots d_k]}$ in decimal notation.
1. Show that ${n \equiv d_1 + d_2 + \dots + d_k \mod 9}$.
2. Show that ${n \equiv d_k \mod 10}$.
3. Show that ${n \equiv \sum_{k=0}^n (-1)^k d_k \mod 11}$.
4. Show that ${n \equiv [d_{k-1}d_k] \mod 100}$.
Question 12 What are the last two digits of ${7^{40001}}$?
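For Question 12, $7^4=2401\equiv 1 \pmod{100}$ and $40001\equiv 1 \pmod 4$, so the last two digits are $07$. Python's three-argument pow verifies this (an illustrative check, not part of the handout):

```python
# 7^4 = 2401 is congruent to 1 mod 100, and 40001 is congruent to 1 mod 4
assert pow(7, 4, 100) == 1
print(pow(7, 40001, 100))  # 7, i.e. the last two digits are 07
```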
Question 13 Show that any perfect square ${n^2}$ is congruent to ${0}$ or ${1 \mod 4}$. Conclude that no element of ${\{11, 111, 1111, \dots\}}$ is a perfect square.
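A quick computational check of Question 13 (illustrative Python, not from the handout): squares only hit residues $0$ and $1$ mod $4$, while every repunit beyond $1$ is $\equiv 3 \pmod 4$:

```python
# Squares are 0 or 1 mod 4, but 11, 111, 1111, ... are all 3 mod 4
assert {n * n % 4 for n in range(1000)} == {0, 1}
rep = 11
for _ in range(8):
    assert rep % 4 == 3      # hence rep cannot be a perfect square
    rep = rep * 10 + 1
```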
Question 14 Show that 3 never divides ${n^2 + 1}$.
4. The Euclidean Algorithm
Fact 15 The “greatest common divisor” of ${a}$ and ${b}$ is:
$\displaystyle \gcd(a,b) = \max\{ d : d|a \textrm{ and } d|b \}$
Question 16 Show that ${\gcd(a,b) = \gcd(a,r)}$ where ${b = ak + r}$ and ${(k,r)}$ is the unique pair of numbers given by the division algorithm.
Question 17 The Fibonacci numbers are defined so that ${F(1) = 1, F(2) = 1}$, and ${F(n) = F(n-1) + F(n-2)}$ for ${n>2}$. Show that ${\gcd(F_n, F_{n-1}) = 1}$.
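Question 17 can be spot-checked numerically (an illustrative Python sketch; the inductive argument via ${\gcd(a,b)=\gcd(a,r)}$ is the real proof):

```python
from math import gcd

def fib(n):
    """F(1) = F(2) = 1 and F(n) = F(n-1) + F(n-2)."""
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return b

assert all(gcd(fib(n), fib(n - 1)) == 1 for n in range(3, 40))
```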
The Fibonacci numbers have the following curious property: Consecutive Fibonacci numbers are the worst-case scenario for the Euclidean Algorithm. In 1844, Gabriel Lamé showed: If ${a \leq b \leq F_n}$ then the Euclidean algorithm takes at most ${n}$ steps to calculate ${\gcd(a,b)}$. Check out this great write-up at Cut the Knot.
4.1. Parity
Question 18 Suppose that ${n = 2k + 1}$ is odd and ${f : \{1, 2, \dots, n\} \rightarrow \{1, 2, \dots, n\}}$ is a permutation. Show that the number
$\displaystyle (1 - f(1))(2 - f(2)) \dots (n - f(n))$
must be even.
Question 19 A room starts empty. Every minute, either one person enters or two people leave. Can the room contain ${2401}$ people after ${3000}$ minutes?
Idea: Consider the “mod-3 parity” of room population.
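The mod-3 parity idea for Question 19 fits in a few lines of Python (illustrative): both moves change the population by $1 \pmod 3$, so after $m$ minutes the population is $\equiv m \pmod 3$:

```python
# The two moves are +1 and -2; both change the population by 1 mod 3,
# so after m minutes the population is congruent to m mod 3.
assert (+1) % 3 == (-2) % 3 == 1
assert 3000 % 3 == 0 and 2401 % 3 == 1   # 2401 is unreachable after 3000 minutes
```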
5. Contest Problems
Question 20 Show that ${\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n}}$ is not an integer for any ${n > 1}$.
Idea: Consider the largest power ${2^k < n}$. Divide out by this largest power. This will make all of the denominators odd. (In fancy number theory terms, you’re using a 2-adic valuation.)
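Exact rational arithmetic makes it easy to spot-check Question 20 for small $n$ (illustrative Python using the fractions module; a check, not a proof):

```python
from fractions import Fraction

h = Fraction(0)
for n in range(1, 25):
    h += Fraction(1, n)
    if n > 1:
        assert h.denominator > 1   # H_n is never an integer for n > 1
```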
Question 21 (Rochester 2012) Consider the positive integers less than or equal to one trillion, i.e. ${1 \leq n \leq 10^{12}}$. Prove that less than a tenth of them can be expressed in the form ${x^3 + y^3 + z^4}$ where ${x}$ , ${y}$ , and ${z}$ are positive integers.
Idea: None of ${x}$, ${y}$, or ${z}$ can be very big. For example, ${x < \sqrt[3]{10^{12}} = 10^4}$.
Question 22 (Rochester 2003) An ${n}$-digit number is “${k}$-transposable” if ${N = [d_1 d_2 \dots d_n]}$ and ${kN = [d_2 d_3 \dots d_n d_1]}$. For example, ${3 \times 142857 = 428571}$ is ${3}$-transposable. Show that there are two 6-digit numbers which are 3-transposable and find them.
Big Idea: Consider repeating decimal expansions.
Observe that ${10 \times 0.[d_1 d_2 d_3 \dots] = d_1 . [d_2 d_3 d_4 \dots]}$.
Find a number with a repeating decimal of length six.
Question 23 Suppose that you write the numbers ${\{1, 2, \dots, 100\}}$ on the blackboard. You now proceed as follows: pick two numbers ${x}$ and ${y}$, erase them from the board, and replace them with ${xy + x + y}$. Continue until there is a single number left. Does this number depend on the choices you made?
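For Question 23, note that ${xy+x+y=(1+x)(1+y)-1}$, so the surviving number is always $\prod_{k=1}^{100}(1+k)-1 = 101!-1$, independent of the choices. A randomized Python check on a smaller board (illustrative only):

```python
import random
from math import factorial

def reduce_board(nums, rng):
    nums = list(nums)
    while len(nums) > 1:
        i, j = sorted(rng.sample(range(len(nums)), 2), reverse=True)
        x, y = nums[i], nums[j]
        nums.pop(i)
        nums.pop(j)
        nums.append(x * y + x + y)
    return nums[0]

# x*y + x + y = (1+x)(1+y) - 1, so the result is always (n+1)! - 1
for seed in range(5):
    assert reduce_board(range(1, 11), random.Random(seed)) == factorial(11) - 1
```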
## Canada Math Camp — Storer Calculus
Posted in Math by pgadey on 2018/07/31
The handout for the talk is available here:
## Homework #5 Question 4
Posted in Math by pgadey on 2018/07/20
Consider a solid ball of radius $R$. Cut a cylindrical hole, through the center of the ball, such that the remaining body has height $h$. Call this the donut $D(R,h)$. Use Cavalieri’s principle to calculate the volume of $D(R,h)$. Calculate the volumes of $D(25,6)$ and $D(50,6)$.
Several students have asked what $D(R,h)$ looks like. Here are some pictures that I found to illustrate the concept. The donut $D(R,h)$ is the region between the red sphere and blue cylinder. The golden balls below show various views of the donut. The donut should fit between the two planes $z=h/2$ and $z=-h/2$, so that it has total height $h$.
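By Cavalieri's principle the cross-section of $D(R,h)$ at height $z$ is an annulus of area $\pi\big((R^2-z^2)-(R^2-(h/2)^2)\big)=\pi\big((h/2)^2-z^2\big)$, which does not depend on $R$, giving the napkin-ring volume $\pi h^3/6$; in particular $D(25,6)$ and $D(50,6)$ both have volume $36\pi$. A Python cross-check (illustrative, not part of the homework):

```python
from math import pi

def donut_volume(h):
    """Napkin-ring volume pi*h^3/6: the R-dependence cancels in each cross-section."""
    return pi * h**3 / 6

def donut_volume_numeric(R, h, steps=100000):
    """Midpoint Riemann sum over the annular cross-sections, for comparison."""
    r2 = R * R - (h / 2) ** 2          # squared radius of the drilled cylinder
    dz = h / steps
    total = 0.0
    for i in range(steps):
        z = -h / 2 + (i + 0.5) * dz
        total += pi * ((R * R - z * z) - r2) * dz
    return total

# D(25, 6) and D(50, 6) have the same volume, 36*pi
print(donut_volume(6), donut_volume_numeric(25, 6), donut_volume_numeric(50, 6))
```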
Posted in Math by pgadey on 2018/07/11
I was looking through the Geogebra site and found this lovely applet Orthographic Projection by Malin Christersson.
This is a lovely tool for investigating one of my favourite facts about hexagons:
The area maximizing orthogonal projection of a cube is the regular hexagon.
It turns out that Malin has tonnes of awesome geometry stuff online!
Awesome math art!
## Public Talks for UTSC
Posted in Math by pgadey on 2018/07/05
## From Colourings to Fixed Points
The notes for the talk are available here.
## Uniform Convergence
The notes for the talk are available here.
## MAT 134 — Post-Term Test #1 Survey
Posted in 2018 -- MAT 134, Math, Uncategorized by pgadey on 2018/05/31
Thank you for filling out MAT 134 Post-Term Test #1 Survey.
Here is what Parker learned about the class!
## MAT 134 Survey Results
Posted in Math by pgadey on 2018/05/09
## Final Poster Presentation Demo
Posted in Math by pgadey on 2018/04/29
## String Figure Poster Sketch
Posted in Math by pgadey on 2018/04/19
We’re almost ready for our final string figure presentation on May 2nd.
Now we just have to put it all together!
## Mathematical Knots and String Figures
Posted in Lecture Notes, Math by pgadey on 2018/04/01
Tagged with: , , | 2019-02-21 02:23:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 103, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.716354489326477, "perplexity": 737.247597124294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247497858.46/warc/CC-MAIN-20190221010932-20190221032932-00585.warc.gz"} |
https://trac.sagemath.org/ticket/20584 | Opened 6 years ago
Closed 5 years ago
# Regular partitions: 1-regular partitions are mishandled on occasion
Reported by: Darij Grinberg
Priority: major
Milestone: sage-8.2
Component: combinatorics
Keywords: partition, regular partition, border case
Cc: Travis Scrimshaw
Authors: Travis Scrimshaw
Reviewers: Darij Grinberg
Report Upstream: N/A
Branch: 7ba506c
Commit: 7ba506c69802fb3a259020d7b0cf41767c5cb248
### Description
1-regular partitions exist (although there is just one of them -- the empty one). Not all of the code behaves well at \ell=1, though; e.g., the iterator runs into an infinite loop once it is past [].
### comment:1 Changed 5 years ago by Travis Scrimshaw
Authors: → Travis Scrimshaw
Branch: → public/combinat/fix_1_regular_partitions-20584
Commit: → 2f0f912920687460bda5ae5c246dea1981d98fed
Milestone: sage-7.3 → sage-8.2
Status: new → needs_review
Hey, I finally remembered to fix this. ^^;;
New commits:
2f0f912 Better handle 1-regular partitions.
### comment:2 Changed 5 years ago by Travis Scrimshaw
Component: PLEASE CHANGE → combinatorics
### comment:3 Changed 5 years ago by git
Commit: 2f0f912920687460bda5ae5c246dea1981d98fed → 7ba506c69802fb3a259020d7b0cf41767c5cb248
Branch pushed to git repo; I updated commit sha1. New commits:
9fb5274 Merge branch 'public/combinat/fix_1_regular_partitions-20584' of git://trac.sagemath.org/sage into reg 7ba506c minor corrections
### comment:4 follow-up: 5 Changed 5 years ago by Darij Grinberg
Fixed a couple little things, one of which predated this ticket.
I don't know if there is an established way of guaranteeing that any function/method that takes a regular partition will still work with a 1-regular partition. Barring that, the only criterion are the doctests, right? Everything else LGTM.
Last edited 5 years ago by Darij Grinberg (previous) (diff)
### comment:5 in reply to: 4 Changed 5 years ago by Travis Scrimshaw
Reviewers: → Travis Scrimshaw
Status: needs_review → positive_review
Fixed a couple little things, one of which predated this ticket.
Thanks.
I don't know if there is an established way of guaranteeing that any function/method that takes a regular partition will still work with a 1-regular partition. Barring that, the only criterion are the doctests, right? Everything else LGTM.
No, I don't think there is such a way other than those functions/methods having explicit checks. This is a degenerate case in terms of my applications (there is not really an [affine] sl1), so I'm not really worried.
### comment:6 Changed 5 years ago by Travis Scrimshaw
Reviewers: Travis Scrimshaw → Darij Grinberg
### comment:7 Changed 5 years ago by Volker Braun
Branch: public/combinat/fix_1_regular_partitions-20584 → 7ba506c69802fb3a259020d7b0cf41767c5cb248
Resolution: → fixed
Status: positive_review → closed
Note: See TracTickets for help on using tickets. | 2022-09-28 21:26:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1705619841814041, "perplexity": 9904.129664294796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00727.warc.gz"} |
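To illustrate the edge case the ticket fixes (a hypothetical Python sketch, not the actual Sage implementation): an $\ell$-regular partition has no part repeated $\ell$ or more times, so for $\ell=1$ a correct iterator must yield only the empty partition for $n=0$, yield nothing for $n>0$, and in either case terminate:

```python
def regular_partitions(n, ell):
    """Yield the ell-regular partitions of n (no part repeated >= ell times)."""
    def gen(remaining, max_part):
        if remaining == 0:
            yield []
            return
        for part in range(min(remaining, max_part), 0, -1):
            for mult in range(1, ell):          # allowed multiplicities 1 .. ell-1
                if part * mult > remaining:
                    break
                for rest in gen(remaining - part * mult, part - 1):
                    yield [part] * mult + rest
    yield from gen(n, n)

assert list(regular_partitions(0, 1)) == [[]]   # the unique 1-regular partition
assert list(regular_partitions(5, 1)) == []     # and the iterator terminates
assert list(regular_partitions(4, 2)) == [[4], [3, 1]]
```

For $\ell=1$ the inner multiplicity loop is empty, so no part can ever be used; that is exactly the guard that prevents the infinite loop described above.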
https://jira.lsstcorp.org/browse/DM-7134 | # singleFrameDriver is only running with a single process
## Details
• Type: Bug
• Status: Done
• Priority: Major
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
None
• Templates:
• Story Points:
0.5
• Sprint:
DRP F16-3
• Team:
Data Release Production
## Description
singleFrameDriver.py is only using a single process. The problem appears to be the change to use a ButlerInitializedTaskRunner, which doesn't inherit from BatchTaskRunner.
## Activity
Paul Price added a comment -
Jim Bosch, would you please review this? Here's the patch:
```
pprice@tiger-sumire:~/LSST/pipe/drivers (tickets/DM-7134=) $ git sub-patch
commit a04ae120c7ae27811bebcf7e2b4036ea8c6c039c
Author: Paul Price <price@astro.princeton.edu>
Date:   Thu Aug 4 15:51:41 2016 -0400

    singleFrameDriver: restore lost parallelism

    The ButlerInitializedTaskRunner used to run the SingleFrameDriverTask
    doesn't inherit from the BatchTaskRunner, which is what provides
    parallelism. Made a new SingleFrameTaskRunner with the characteristics
    of both BatchTaskRunner and ButlerInitializedTaskRunner using multiple
    inheritance (safe without further work because the specialisations in
    those TaskRunners are orthogonal).

diff --git a/python/lsst/pipe/drivers/singleFrameDriver.py b/python/lsst/pipe/drivers/singleFrameDriver.py
index b13f64e..60257d2 100644
--- a/python/lsst/pipe/drivers/singleFrameDriver.py
+++ b/python/lsst/pipe/drivers/singleFrameDriver.py
@@ -1,7 +1,7 @@
 from lsst.pipe.base import ArgumentParser, ButlerInitializedTaskRunner
 from lsst.pipe.tasks.processCcd import ProcessCcdTask
 from lsst.pex.config import Config, Field, ConfigurableField, ListField
-from lsst.ctrl.pool.parallel import BatchParallelTask
+from lsst.ctrl.pool.parallel import BatchParallelTask, BatchTaskRunner

 class SingleFrameDriverConfig(Config):
     processCcd = ConfigurableField(target=ProcessCcdTask, doc="CCD processing task")
@@ -9,12 +9,17 @@ class SingleFrameDriverConfig(Config):
     ccdKey = Field(dtype=str, default="ccd", doc="DataId key corresponding to a single sensor")


+class SingleFrameTaskRunner(BatchTaskRunner, ButlerInitializedTaskRunner):
+    """Run batches, and initialize Task using a butler"""
+    pass
+
+
 class SingleFrameDriverTask(BatchParallelTask):
     """Process CCDs in parallel
     """
     ConfigClass = SingleFrameDriverConfig
     _DefaultName = "singleFrameDriver"
-    RunnerClass = ButlerInitializedTaskRunner
+    RunnerClass = SingleFrameTaskRunner

     def __init__(self, butler=None, refObjLoader=None, *args, **kwargs):
         """!
```
Jim Bosch added a comment - - edited
My only concern is that I think you might need to modify BatchTaskRunner.__init__ to use super to ensure that TaskRunner.__init__ is invoked properly; it'd certainly be good practice to do so, even if we're skating by on an edge case. In fact, I'd feel a bit safer if all three derived classes explicitly did that, but I've got to imagine Python also handles multiple inheritance correctly if you don't define __init__ at all.
Paul Price added a comment -
I don't think that's necessary.
• BatchTaskRunner overrides __init__, run and __call__.
These lists are orthogonal, so the correct parent should be firing without having to worry about the MRO. Since you mentioned it explictly, take the example of __init__:
• SingleFrameTaskRunner.__init__ isn't defined, so we go looking up the MRO.
• BatchTaskRunner is next in the MRO, and it defines __init__, so that fires.
• That explicitly calls TaskRunner.__init__ (not using super).
• That doesn't call any other __init__ method.
Note that nothing is calling ButlerInitializedTaskRunner.__init__, and that's good because it doesn't override that method. So I think everything fires that needs to, and nothing fires that shouldn't. If we used super somewhere but not everywhere, that would create trouble, but I think this does the right thing now without any changes required.
Please let me know if you disagree or still have concerns. I'll wait on your final OK before merging.
Jim Bosch added a comment -
I don't disagree with your analysis that the current code is safe, but I still worry that it's fragile w.r.t. the addition of new subclasses. That said, TaskRunner is perhaps not long for this world with SuperTask on the horizon, so it's probably not necessary to fix this now.
Paul Price added a comment -
Merged to master.
Paul Price added a comment -
Thanks, I'll go ahead and merge now.
I'm not sure how this is any more fragile than usual, nor how super would fix it (using super in a class hierarchy that doesn't all use super is dangerous, and I thought you were against using it for that reason); perhaps you could explain that offline for me?
Jim Bosch added a comment -
To be honest, while I do remember being unhappy with super in the past, I don't remember the arguments for and against anymore, and I don't think that unease extended to diamond inheritance when all classes have a nontrivial constructors; I think super is till the only way to handle that situation correctly.
I wonder what the situation is in Python 3 (or futurized Python 2), where I hear super is better - Tim Jenness?
Tim Jenness added a comment - - edited

Python 3 super docs are at https://docs.python.org/3/library/functions.html?highlight=super#super which then tells you to read https://rhettinger.wordpress.com/2011/05/26/super-considered-super/
## People
• Assignee:
Paul Price
Reporter:
Paul Price
Reviewers:
Jim Bosch
Watchers:
Jim Bosch, Paul Price, Tim Jenness | 2017-07-27 00:39:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3626590967178345, "perplexity": 5065.511998810685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426693.21/warc/CC-MAIN-20170727002123-20170727022123-00369.warc.gz"} |
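The multiple-inheritance argument in the review thread can be demonstrated with a toy model (stand-in classes, not the real lsst.pipe.base or lsst.ctrl.pool code): because the two mixins override disjoint sets of methods, the MRO dispatches each call to the intended parent:

```python
class TaskRunner:
    def __init__(self, name):
        self.name = name
    def makeTask(self):
        return "plain task"

class BatchTaskRunner(TaskRunner):
    # Overrides __init__ (the real class also overrides run and __call__)
    def __init__(self, name):
        TaskRunner.__init__(self, name)   # explicit call, as discussed above
        self.batched = True

class ButlerInitializedTaskRunner(TaskRunner):
    # Overrides makeTask only, so it never clashes with the batch mixin
    def makeTask(self):
        return "butler-initialized task"

class SingleFrameTaskRunner(BatchTaskRunner, ButlerInitializedTaskRunner):
    """Run batches, and initialize Task using a butler."""
    pass

runner = SingleFrameTaskRunner("demo")
assert runner.batched                                    # __init__ from BatchTaskRunner
assert runner.makeTask() == "butler-initialized task"    # makeTask from the other mixin
```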
https://msp.org/pjm/2021/315-2/p04.xhtml | #### Vol. 315, No. 2, 2021
Distance and the Goeritz groups of bridge decompositions
### Daiki Iguchi and Yuya Koda
Vol. 315 (2021), No. 2, 347–368
DOI: 10.2140/pjm.2021.315.347
##### Abstract
We prove that if the distance of a bridge decomposition of a link with respect to a Heegaard splitting of a $3$-manifold is at least $6$, then the Goeritz group is a finite group.
##### Keywords
bridge decomposition, curve complex, Goeritz group
##### Mathematical Subject Classification
Primary: 57K10, 57M60 | 2022-06-29 12:09:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6871941685676575, "perplexity": 14071.980206220238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00383.warc.gz"} |
https://math.stackexchange.com/questions/2009258/when-can-we-express-a-matrix-in-a-smaller-basis/2013785 | # When can we express a matrix in a smaller basis?
Let's suppose that we have two orthonormal bases, $V=\{|v_i\rangle \}$, $i=1,\dots,n$ (ket notation) and $U=\{|u_j\rangle\}$, $j=1,\dots,m$, with $m<n$ (i.e. $\dim U< \dim V$), where $|u_j\rangle=\sum_{i=1}^{n}c_i^j |v_i\rangle$. Given that $[A]_V$ is the matrix representation of some transformation $A$ in the basis $V$, under which conditions is $[A]_U$ the matrix representation of the same transformation $A$ in the basis $U$?
$$[A]_{U}=\left[ \begin{array}{c} \langle u_1| \\ \vdots\\ \langle u_m| \end{array} \right] [A]_{V}\left[ \begin{array}{ccc} |u_1\rangle & \cdots & |u_m\rangle \end{array} \right]$$
• Are you assuming matrix A is not full-rank? – DBPriGuy Nov 11 '16 at 13:07
• This doesn't make sense without additional assumptions. $A$ represents a linear mapping from an $n$-dimensional space to itself. The set $K$ only spans a two-dimensional subspace of that bigger space, and you can only restrict the mapping $A$ to that subspace if it happens to be invariant under $A$ (that is, if it is the case that whenever you apply $A$ to a vector in the subspace you get something which is still in the subspace). – Hans Lundmark Nov 11 '16 at 13:09
• I have changed the original question. I don't know if it now makes sense. – AndyK Nov 11 '16 at 18:25
• The written equation is always true, you are just defining the matrix $A_U$. Do you mean to ask when there exists a matrix $A_U$ such that $A_V = \text{[...something involving$A_U$...]}$? – Rahul Nov 11 '16 at 18:56
• Ok, I added an answer which may clarify my question. – AndyK Nov 12 '16 at 16:04
A linear transformation $A$ acts on a vector space $X$. If the $X$ is finite dimensional, then every basis of $X$ has the same number of vectors (which is equal to $\dim X$). Hence, your question doesn't make sense from the start: you cannot have two (orthonormal) bases of $X$ with different numbers of basis vectors.
• What about an incomplete basis of $X$? Please see my answer below. – AndyK Nov 14 '16 at 19:42
• What's an incomplete basis? I assume you mean an orthonormal set that's not spanning (which is not a basis). Then, the matrix of $[A]_U$ is $m \times m$ but the matrix $[A]_V$ is $n \times n$. If $n \neq m$, how can you expect these matrices to be "the same"? – Jon Warneke Nov 14 '16 at 20:08
There $\nexists$ a linear transformation between two Vector Spaces with different dimensions. If there was, it would be a homomorphism and two Vector Spaces $M, N$ are homomorphic $\iff \dim M = \dim N$.
• This is not true. Consider the projection transformation $T : \mathbb R^2 \to \mathbb R$ given by $T(x, y) = x$. Then $T$ is a linear transformation (as you can check) between vector spaces of different dimensions. – Jon Warneke Nov 14 '16 at 16:47
I figured out the answer to my badly posed question, so please allow me to post it.
For simplicity I use $n=3$ and $m=2$. The basis vectors $|v_i\rangle$ are orthonormal and the new basis vectors $|u_j\rangle$ are defined as linear combinations of $|v_i\rangle$: $$|u_j\rangle=\sum_{i=1}^{3}c_i^j|v_i\rangle$$ In order for them to be orthonormal too, the condition is: if $c_i^1\neq 0$ then $c_i^2 = 0$, $\forall i$. Let's suppose, without loss of generality, that \begin{align} |u_1\rangle &=c_1|v_1\rangle+c_2|v_2\rangle=\left[ \begin{array}{c} c_1 \\ c_2\\ 0 \end{array} \right]_V\tag{1}\\ |u_2\rangle &=c_3|v_3\rangle=\left[ \begin{array}{c} 0 \\ 0\\ c_3 \end{array} \right]_V \tag{2} \end{align}
Next, consider some operator $A$, with the following matrix representation in the basis $V$: $$[A]_{V}=\left[ \begin{array}{ccc} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{array} \right]_V=\sum_{i,j=1}^{3}A_{ij}|v_i\rangle\langle v_j|$$ where $A_{i,j}\equiv \langle v_i|A|v_j \rangle$, $i,j=1,2,3$.
Let's suppose that we can write: $$[A]_{U}=\left[ \begin{array}{cc} A'_{11} & A'_{12} \\ A'_{21} & A'_{22} \end{array} \right]_U=\sum_{i,j=1}^{2}A'_{ij}|u_i\rangle\langle u_j|$$ where $A'_{i,j}\equiv \langle u_i|A|u_j \rangle$, $i,j=1,2$. From $(1)$ and (2) we can express $|u_i\rangle \langle u_j|$ in terms of $|v_k\rangle \langle v_{\ell}|$, $k,\ell=1,2,3$. Therefore \begin{align} [A]_{V} &=A'_{11}\left(|c_1|^2|v_1\rangle \langle v_1|+c_1 c_2^*|v_1\rangle \langle v_2|+c_2 c_1^*|v_2\rangle \langle v_1|+|c_2|^2|v_2\rangle \langle v_2|\right)\\ &+A'_{12}\left(c_1 c_3^*|v_1\rangle \langle v_3|+c_2 c_3^*|v_2\rangle \langle v_3|\right)\\ &+A'_{21}\left(c_3 c_1^*|v_3\rangle \langle v_1|+c_3 c_2^*|v_3\rangle \langle v_2|\right)\\ &+A'_{22}|c_3|^2|v_3\rangle \langle v_3|\\ \end{align} and we get the following conditions: \begin{align} \frac{A_{11}}{|c_1|^2}&=\frac{A_{12}}{c_1 c_2^*}=\frac{A_{21}}{c_2 c_1^*}=\frac{A_{22}}{|c_2|^2}=A'_{11} \\ \frac{A_{13}}{c_1 c_3^*}&=\frac{A_{23}}{c_2 c_3^*}=A'_{12} \\ \frac{A_{31}}{c_3 c_1^*}&=\frac{A_{32}}{c_3 c_2^*}=A'_{21}\\ \frac{A_{33}}{|c_3|^2}&=A'_{22} \end{align} If they are all satisfied, we can express $A$ in the basis $U$: $$[A]_U=\left[ \begin{array}{c} \langle u_1| \\ \langle u_2| \end{array} \right]_V [A]_V \hspace{0.2cm}\left[ \begin{array}{cc} |u_1\rangle & |u_2\rangle \end{array} \right]_V=\left[ \begin{array}{cc} A'_{11} & A'_{12} \\ A'_{21} & A'_{22} \end{array} \right]_U$$ This of course can be generalized to any $n$ and $m$, $n>m$.
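These compatibility conditions can also be checked numerically. The sketch below is my own illustration, not from the thread: with $P = UU^\dagger$ the orthogonal projector onto the span of the $|u_j\rangle$, the conditions above amount to $PAP = A$, and only then does the $2\times 2$ compression $U^\dagger A U$ retain all of $A$.

```python
import numpy as np

def compress(A, U, tol=1e-10):
    """U: n x m matrix with orthonormal columns spanning a subspace S.
    Returns the m x m compression U* A U and a flag that is True
    iff P A P == A for the projector P = U U*, i.e. A is entirely
    supported on S and the compression loses nothing."""
    P = U @ U.conj().T
    A_U = U.conj().T @ A @ U
    return A_U, bool(np.allclose(P @ A @ P, A, atol=tol))

# n = 3, m = 2: take |u_1> = e1 and |u_2> = e2 as the orthonormal set.
U = np.eye(3)[:, :2]
A_good = np.diag([2.0, 3.0, 0.0])   # supported entirely on span{e1, e2}
A_bad = np.diag([2.0, 3.0, 1.0])    # also acts nontrivially on e3

print(compress(A_good, U)[1])  # True  -> the 2x2 form is faithful
print(compress(A_bad, U)[1])   # False -> information about e3 is lost
```

This mirrors the conclusion of the edit below the derivation: the compression represents $A$ only when $A$ does nothing outside the chosen subspace.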
Edit: I spotted the mistake in my answer: the vectors $|u_i\rangle$, $i=1,2$ of course do not form another basis of the $3\times 3$ vector space $V_A$ on which the operator $A$ acts, $A:V_A\to V_A$. They form an incomplete orthogonal set, $S\subset V_A$ (which is a plane in $V_A$). If the mentioned conditions are all satisfied, this just means that $A$ projects all the vector of $V_A$ to $S$, $A:V_A\to S$. But this doesn't mean that we can write $A$ as $2\times 2$ matrix because this would restrict its action to $S$, i.e. $A:S\to S$, which is not true for $A$, because by definition it acts on whole space $V_A$. | 2019-08-19 12:15:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970731139183044, "perplexity": 256.4390383490372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314732.59/warc/CC-MAIN-20190819114330-20190819140330-00448.warc.gz"} |
https://dwwiki.mooo.com/wiki/Spells_of_Thee_Moste_Ceremonious_Rite_of_AshkEnte | # Rite of AshkEnte
The Rite of AshkEnte is a ceremony that can summon Death to ask a favour. It consists of two spells, both found in the book Spells of Thee Moste Ceremonious Rite of AshkEnte.
# The Rite of AshkEnte
The Rite of AshkEnte is an ancient ceremony used to summon Death, or at least anyone filling in for him at the time.
Since Death is a very busy anthropomorphic personification, he can act somewhat annoyed at being interrupted in this fashion.
Disappointed wizards report that the only requests Death is willing to consider at this time are those from the deceased who cannot be raised or resurrected normally due to lack of lives in reserve, to ask for another go at corporeality.
The rite has several stages that must all be completed successfully before a request can be made.
• The spell Malich's AshkEnte Circle must be cast with occult items, with those worth more being able to place Death in a more favourable mood to answer requests.
• The circle or special octogram must then be fully charged by 7+1 wizards with the command recharge. This takes time and backfires can do more than 2000 hp of damage.
• The spell Malich's AshkEnte Summoning Incantation can then be attempted to actually summon Death.
• If everything goes well Death should then appear to respond to requests for a short time. His mood should in part depend on the quality of occult items used when creating the circle.
Warning: Please see the help at the end of this page (or 'help ashkente' on the mud) for the consequences for a player brought back from beyond their last death - there are penalties!
# Malich's AshkEnte Circle
Malich's AshkEnte Circle
Spell information
| Attribute | Value |
| --- | --- |
| Nickname | mac |
| Guild | Wizards |
| Type | Miscellaneous |
| Description | Creates a circle to summon Death. |
| GP cost | 20 |
| Mind space | 10 |
| Thaums | 2 |
| Components | staff, occult items (consumed): phoenix egg, ram skull, candlestick, beeswax candle |
| Tome | Spells of Thee Moste Ceremonious Rite of AshkEnte |
Malich's AshkEnte Circle (abbreviated as MAC) is a high order miscellaneous spell which begins the long and overcomplicated version of the ritual used to summon Death.
## Spell Details
This spell costs 20 GP to cast and takes up 10 units of mind space.
The components required are:
• a staff
• the following 4 occult items (consumed)
• a phoenix egg
• a ram skull
• a candlestick
• a beeswax candle
The occult items are sold in various magic shops (including, at least, the shop in Hillshire and Strumplott's Talismans in the Magical Supplies Empormium on Sator Square in Ankh-Morpork). The first three each exist in a "real" version selling at A$5000 and in cheap plastic/wooden versions for A$100.
Example shop listing from Hillshire:
A: an iron candle holder for A$100 (one left).
B: a cheap wooden ram skull for A$100 (one left).
C: a cheap wooden phoenix egg for A$100 (one left).
D: a golden candle holder for A$5000 (one left).
E: a ram skull for A$5000 (one left).
F: a phoenix egg for A$5000 (one left).
### Skills
The following skills are used in the stages of this spell:
Success rates
| Skill | Equal likelihood | Little more likely | Likely to succeed | Very likely to succeed | Almost certainly succeed | Most certainly succeed |
| --- | --- | --- | --- | --- | --- | --- |
| Channeling | 204 | 211 | 219 | 228 | 243 | >270 |
| Chanting | 200 | 205 | 215 | 225 | 240 | >270 |
| Talisman | 140 | 152 | 160 | 170 | 180 | >190 |
| Staff | 202 | 211 | 220 | 232 | 253 | >280 |
### Casting messages
Casting
You prepare to cast Malich's AshkEnte Circle.
You hold your staff out in front of you and close your eyes.
You begin a slow and solemn chant that resonates around the area.
You lay the ceremonial items down in significant positions around the circle.
You trace a dizzyingly complex pattern on the ground with the tip of your staff.
Success:
A large circle fades into existence on the floor and begins to glow in a decidedly eldritch fashion.
Others see:
# Charging the circle
The 7+1 wizards taking part in the ceremony must channel some of their power into the circle until it is fully charged.
This uses the recharge command, "recharge ashkente circle".
Fully charged AshkEnte circle:
A large complicated octogram on the floor glows in eldritch fashion.
Even the untrained eye could tell that this circle, with all its squiggles and lines, is of above-average importance.
The octarine light pulsing around its shape prove that beyond doubt, however.
Eight points on the edge of the circle glow with particular brilliance.
# Malich's AshkEnte Summoning Incantation
Malich's AshkEnte Summoning Incantation
Spell information
| Attribute | Value |
| --- | --- |
| Nickname | masi |
| Guild | Wizards |
| Type | Miscellaneous |
| Description | Summons Death to bring back a player from final death. |
| GP cost | 120 |
| Mind space | 60 |
| Thaums | 12 |
| Components | charged AshkEnte circle |
| Tome | Spells of Thee Moste Ceremonious Rite of AshkEnte |
Malich's AshkEnte Summoning Incantation (abbreviated as MASI) is a high order miscellaneous spell which brings Death to the room, anthropomorphically speaking. It requires that all previous stages in the AshkEnte ritual have been completed successfully.
## Spell Details
This spell costs 120 GP to cast and takes up 60 units of mind space.
### Skills
The following skills are used in the stages of this spell:
Success rates
| Skill | Equal likelihood | Little more likely | Likely to succeed | Very likely to succeed | Almost certainly succeed | Most certainly succeed |
| --- | --- | --- | --- | --- | --- | --- |
| Channeling | 334 | 344 | 354 | 364 | 380 | >400 |
| Abjuring | 340 | 350 | 360 | 370 | 390 | >410 |
| Summoning | 360 | 370 | 380 | 395 | 405 | >435 |
### Casting messages
Casting
You prepare to cast Malich's AshkEnte Summoning Incantation.
You raise your arms to the heavens and feel energy flow through you.
Wisps and shadows of things unknown swirl around you, and you struggle to retain your balance while continuing your ritual.
You call upon Him and wait apprehensively.
Success:
With a strangely organic pop Death appears in the centre of the circle.
Death asks: WHAT IS ALL THIS?
Death surveys the scene, taking in the markings on the floor and discarded occult miscellanea, and the people present.
Death says: I SEE. I DO WISH YOU'D STOP DOING THAT, I MIGHT HAVE BEEN BUSY.
Death says: THIS MUST BE VITALLY IMPORTANT.
Death looks expectant.
Others see:
Savant Poncho raises his arms to the heavens melodramatically.
Wisps and shadows of things unknown dance in amongst you, and Savant Poncho struggles to retain his balance while continuing the ritual.
Savant Poncho calls upon Him and waits apprehensively.
The air around you crackles with power.
With a strangely organic pop Death appears in the centre of the circle.
# Requests
The ghost wishing to regain corporeality must then make the request to Death. If he's in a good enough mood he'll grant a favour.
The ghost of Mournful Mancow says: please request favour raise death life
Death turns to face the ghost of Mournful Mancow.
Death says: IT SEEMS THAT THESE PEOPLE CARE ENOUGH ABOUT YOU TO ASK FOR ANOTHER CHANCE ON YOUR BEHALF.
Death says: I SEEM TO RECALL LOOKING THE OTHER WAY A FEW TIMES BEFORE.
Death sighs.
Death says: THINK OF THIS AS AN EARLY HOGSWATCH PRESENT.
Death gestures to the wizards gathered here.
Death retrieves an hourglass from his robe and fiddles with it for a moment.
Mancow appears in more solid form.
A soft light fills the air around Mancow and dissipates, leaving him looking whole and well.
Death says: I BELIEVE I AM NEEDED ELSEWHERE.
Death says: GOODBYE. FOR NOW.
Death snaps his fingers and fades away.
The large eldritch circle pulses and disappears. | 2023-03-30 02:25:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21227240562438965, "perplexity": 11103.597627930865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00728.warc.gz"} |
https://minewiki.engineering.queensu.ca/mediawiki/index.php?title=Estimation_of_the_potential_production_rate&diff=3101&oldid=3100 | # Difference between revisions of "Estimation of the potential production rate"
Article written by Sci '14: Travis Dominski, Brett Kolankowski, Andrew Marck, Michal Pasternak, Steve Shamba
Production rate and mine life play a large role in determining the project economics. A higher production rate typically allows for lower operating costs, while the subsequent shorter mine life maximizes the Net Present Value of ore extraction. However, a higher production rate requires a greater capital cost, as larger equipment and infrastructure are required. Estimation of production rate is a problem that has been looked at by many scholars. The most well-known is H. K. Taylor, who developed the empirical Taylor's Rule, a rule of thumb that is commonly taught to Mining Engineering students. While the most popular, Taylor's Rule is not the only method that can be used when estimating production rate. Taylor's Rule only takes into account tonnage, while other methods use the grade of the ore and financial factors.
## Taylor’s Rule
Taylor's Rule of Thumb for Mine Life
H. K. Taylor, a mining engineer working with Placer Development Limited, proposed "Taylor's Law" at a mine valuation and feasibility study seminar in Spokane, Washington in 1976[1]. The rule was then published in 1977. Taylor realized the need for such a rule, as the existing "supposedly optimum mining rate have long been estimated by elementary economic theory, usually by present-value methods, but it has been observed that many such exercises show a bias towards high rates of working that are unachievable or undesirable in practice."[1] The previous methods had led to inexperienced companies proposing mines with wildly unrealistic rates. Taylor based the empirical rule on nearly 30 mining projects, mostly young mines.
Taylor's rule was tested in 1984 by McSpadden and Schaap, who checked it against 45 open-pit copper deposits. McSpadden and Schaap found the rule needed to be tweaked slightly; however, this finding was due to the specificity of their mine types compared to Taylor's wide range of mine types.
### Equation for Mine Life and Production Rate
The empirical equation for mine life that Taylor developed is:

Life (years) = 0.2 × (Expected ore tonnage)^0.25

with tonnage in metric tonnes.
The equation can be used to find the production rate by dividing the expected tonnage by the mine life:

Production rate (tonnes/year) = Tonnage / Life = 5 × (Tonnage)^0.75
Assuming a mine operating 350 days a year, Taylor's rule gives the equation:

Production rate (tonnes/day) = 5 × (Tonnage)^0.75 / 350 ≈ 0.0143 × (Tonnage)^0.75
Taylor's rule was originally developed for metric tonnes, but it can be applied to short tons. As the difference between metric tonnes and short tons is lessened by the 4th root, the resulting error in mine life would only be +2.5%.
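Taylor's relationship is simple enough to sanity-check directly. A minimal sketch (my own illustration), taking tonnage in metric tonnes, a 350-day operating year, and mine life = 0.2 × T^0.25 as above:

```python
def taylor_mine_life(tonnage):
    """Taylor's rule: mine life in years = 0.2 * (expected ore tonnage)**0.25."""
    return 0.2 * tonnage ** 0.25

def taylor_daily_rate(tonnage, operating_days=350):
    """Production rate (tonnes/day) implied by Taylor's rule."""
    return tonnage / (taylor_mine_life(tonnage) * operating_days)

# A 10 Mt deposit:
print(round(taylor_mine_life(10e6), 1))   # ~11.2 years
print(round(taylor_daily_rate(10e6)))     # ~2540 tonnes/day
```

Note that the daily rate works out to 0.0143 × T^0.75, matching the Taylor row of the coefficient table in the Applicability section.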
### Limitations
Taylor identified several scenarios where his rule of thumb does not work well:
• Old mines far into the stages of operation (can work faster)
• Unusually large deposits (>200 Mt, as the suggested mining rate would be unobtainable)
• Very deep, flat ore bodies (production limited by hoisting limits of shaft)
• Steeply dipping tabular or massive deposits that are worked in steps towards great depths (limited to rate of deepening of the working)
• Erratically mineralized multi-vein systems (production rate limited to discovery rate)
### USBM/USGS Modifications
Modifications of Taylor's Rule
Taylor's rule has been modified and tweaked by the United States Bureau of Mines (USBM) and its successor, the United States Geological Survey (USGS), using larger and more modern sets of data.[2] All the modifications of Taylor's rule use the same general relationship and just revise the coefficients.
D.A. Singer, W.D. Menzie and K.R. Long revised Taylor's rule in 1998, based on a data set of 41 open pit gold-silver mines[3]. Their model found that appropriate rates for open pit gold-silver mines should be significantly higher than Taylor's Rule suggests. Their resulting equation was:

Capacity = 0.416 × (Tonnage)^0.5874
D.A. Singer, W.D. Menzie and K.R. Long also adapted Taylor's rule for underground massive sulfide deposits in 2000.[4] In this modification it is clear that Taylor's rule overestimates the underground mining rate, as it was calibrated to open pit mines. The resulting equation for underground massive sulfide deposits is:

Capacity = 0.0248 × (Tonnage)^0.704
Long and Singer further studied Taylor's rule in 2001 and calibrated it to 45 open pit copper mines.[5] The open pit copper model proved to have a curve halfway between the 1998 gold-silver curve and Taylor's Rule. Since the mines are the same type as those used for Taylor's rule, it can be seen that the realistic production rate has increased in the decades since Taylor's rule was first developed.
The latest study on Taylor's rule was completed by Long in 2009.[6] Long's study is the most extensive of all studies looking at the relationship between capacity and reserve. The study looked at 342 open pit and 197 underground mines located in the Americas and Australia. Long found that there was a significant difference between the production rate of underground versus open pit and block caving mines. The equation found for underground deposits was:

Capacity = 0.297 × (Tonnage)^0.562
The equation for open pit and block caving deposits was found to be:

Capacity = 0.123 × (Tonnage)^0.649
Long's 2009 study also found that grade and capital cost play a part in estimating production rate; however, expected tonnage was the primary factor. Long did generate equations involving grade and capital cost for open pit mines, but the inputs for these equations were not clarified.
### Applicability
Many scoping studies use the original Taylor's Rule as a starting point for production rate regardless of the type of mine. It is clear from the USBM/USGS modifications of Taylor's Rule that a better estimate is possible by adding the variable of mine type as well as expected tonnage. The table below acts as a guide to selecting an appropriate version of Taylor's rule. The general equation for Taylor's rule is:

Capacity = a × (Tonnage)^b
| Mine Type | a | b | Source | # Mines |
| --- | --- | --- | --- | --- |
| Unknown | 0.0143 | 0.75 | Taylor (1986) | ~30 |
| Open Pit - Gold/Silver | 0.416 | 0.5874 | Singer, Menzie, Long (1998) | 41 |
| Open Pit - Copper | 0.0236 | 0.74 | Singer, Long (2001) | 45 |
| Open Pit/Block Caving - Other | 0.123 | 0.649 | Long (2009) | 342 |
| Underground - Massive Sulfide | 0.0248 | 0.704 | Singer, Menzie, Long (2000) | 28 |
| Underground - Other | 0.297 | 0.562 | Long (2009) | 197 |
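The table can be applied mechanically. Below is a small helper of my own (an illustration, not from the cited studies); it assumes the coefficients give capacity in tonnes/day for tonnage in tonnes, the units under which the Taylor row reproduces his 0.0143 × T^0.75 daily rate:

```python
# (a, b) coefficient pairs from the table above
VARIANTS = {
    "unknown": (0.0143, 0.75),                          # Taylor (1986)
    "open pit - gold/silver": (0.416, 0.5874),          # Singer, Menzie, Long (1998)
    "open pit - copper": (0.0236, 0.74),                # Singer, Long (2001)
    "open pit/block caving - other": (0.123, 0.649),    # Long (2009)
    "underground - massive sulfide": (0.0248, 0.704),   # Singer, Menzie, Long (2000)
    "underground - other": (0.297, 0.562),              # Long (2009)
}

def daily_capacity(tonnage, mine_type="unknown"):
    """Estimated production rate via the general relationship C = a * T**b."""
    a, b = VARIANTS[mine_type]
    return a * tonnage ** b

print(round(daily_capacity(10e6)))                       # ~2543 t/day (Taylor)
print(round(daily_capacity(10e6, "open pit - copper")))  # ~3572 t/day
```

For a 10 Mt deposit the copper-calibrated curve suggests a noticeably higher rate than Taylor's original rule, consistent with the discussion above.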
## Other Methods
### Wells (1978)
In 1978, H.M. Wells published the paper “Optimization of mining engineering design in mineral valuation”, which proposed maximizing the present value ratio (PVR) in order to find the optimal production rate.[7] PVR was the ratio of PVOUT (the present value of positive cash flows) to PVIN (the present value of negative cash flows). A PVR greater than 1 represented a profitable production rate while a PVR less than 1 was an unprofitable production rate. The optimal production rate is the rate that causes the PVR to be at its maximum.
### Lizotte and Elbrond (1982)
Y. Lizotte and J. Elbrond researched optimization of production rates in 1982. They approached the problem using open-ended dynamic programming and created a model for it. However, they concluded that there were vast differences between their model and realistic mining.[7]
### Cavender (1992)
B. Cavender took a theoretical approach to determining appropriate mine life, looking at the financial side of mining. He developed three techniques for finding the mine life that optimized the NPV of the project, looking at cash flow, stochastic risk modeling, and option pricing. Since the model deals with a hypothetical mine and does not include realistic mining constraints, it has little application to real mine design.
### Smith (1997)
L.D. Smith in 1997 found that a mine's production rate was better expressed as a range than as a specific point. Smith's paper proposed an appropriate range of production rates, with the upper limit being the rate that results in the highest NPV and the lower limit being the rate that best repays capital costs.
### Abdel Sabour (2002)
Effects of various parameters on optimal production rate (Abdel Sabour)[7]
Using a mathematical model, S.A. Abdel Sabour looked at the effect of various physical, economic and financial factors on the optimal production rate.[7] For the physical factors, it was found that the optimal production rate increases with both the tonnage and grade of the deposit. A higher gold price resulted in a higher production rate. The production rate also depended on the expected growth rate of gold prices, with a higher growth rate resulting in a lower production rate. If the mining cost growth rate is expected to be high, the optimal production rate should also be high to avoid higher mining costs in later years. The final factor considered was the cost of capital (discount rate). It was found that for a low (~5%) or high (~35%) cost of capital the production rate should be low; between these extremes, the production rate should be higher.
Abdel Sabour's definition of the optimal production rate is the rate that generates the highest NPV, obtained by applying microeconomic theory. Since these optimal production rates look at economic theory rather than engineering design constraints, his work is not useful on its own to estimate production rate. It can, however, be used to tweak the results of other production rate estimates, such as Taylor's rule.
## Summary
Taylor's rule is the best way to get a preliminary estimate of the production rate and the mine life during mine design. This is due to its simplicity of calculation, since it involves only one variable, as well as its real-world applicability, since it is built upon real-world data. Modifications by the USBM/USGS should be considered when using Taylor's rule, as they tweak it to better suit the type of mine.
## References
1. Taylor, H.K., 1986, Rates of working mines; a simple rule of thumb: Transactions of the Institution of Mining and Metallurgy, v. 95, section A, p. 203-204.
2. Cite error: Invalid <ref> tag; no text was provided for refs named num1
3. Singer, D.A., Menzie, W.D., and Long, K.R., (1998). A simplified economic filter for open-pit gold-silver mining in the United States: U. S. Geological Survey Open-File Report 98-207, 10 p., accessed February 02, 2014, at http://geopubs.wr.usgs.gov/open-file/of98-207/OFR98-207.pdf
4. Singer, D.A., Menzie, W.D., and Long, K.R., (2000), A simplified economic filter for underground mining of massive sulfide deposits. accessed February 02, 2014, at http://pubs.usgs.gov/of/2000/0349/report.pdf
5. Long, K.R., and D.A. Singer, (2001), A Simplified Economic Filter for Open-Pit Mining and Heap-Leach., accessed February 02, 2014, at http://geopubs.wr.usgs.gov/open-file/of01-218/of01-218.pdf
6. Long, K.R., (2009), A Test and Re-Estimation of Taylor's Empirical Capacity–Reserve Relationship. accessed February 02, 2014, at http://link.springer.com/article/10.1007%2Fs11053-009-9088-y
7. Abdel Sabour, S. A. "Mine size optimization using marginal analysis." Resources Policy 28, no. 3 (2002): 145-151. | 2022-10-02 03:16:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6292984485626221, "perplexity": 3493.996152623281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00359.warc.gz"} |
https://handwiki.org/wiki/Infinite_set | # Infinite set
Short description: Set that is not a finite set
In set theory, an infinite set is a set that is not a finite set. Infinite sets may be countable or uncountable.[1][2]
## Properties
The set of natural numbers (whose existence is postulated by the axiom of infinity) is infinite.[2][3] It is the only set that is directly required by the axioms to be infinite. The existence of any other infinite set can be proved in Zermelo–Fraenkel set theory (ZFC), but only by showing that it follows from the existence of the natural numbers.
A set is infinite if and only if for every natural number, the set has a subset whose cardinality is that natural number.
If the axiom of choice holds, then a set is infinite if and only if it includes a countably infinite subset.
If a set of sets is infinite or contains an infinite element, then its union is infinite. The power set of an infinite set is infinite.[4] Any superset of an infinite set is infinite. If an infinite set is partitioned into finitely many subsets, then at least one of them must be infinite. Any set which can be mapped onto an infinite set is infinite. The Cartesian product of an infinite set and a nonempty set is infinite. The Cartesian product of an infinite number of sets, each containing at least two elements, is either empty or infinite; if the axiom of choice holds, then it is infinite.
If an infinite set is a well-ordered set, then it must have a nonempty, nontrivial subset that has no greatest element.
In ZF, a set is infinite if and only if the power set of its power set is a Dedekind-infinite set, having a proper subset equinumerous to itself.[5] If the axiom of choice is also true, then infinite sets are precisely the Dedekind-infinite sets.
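The Dedekind property is easy to exhibit concretely for the natural numbers: doubling is a bijection from the naturals onto the even naturals, a proper subset. A minimal sketch:

```python
def double(n):
    """Maps the naturals onto the even naturals, a proper subset:
    a bijection with a proper subset witnesses Dedekind-infiniteness."""
    return 2 * n

# Check injectivity and properness on an initial segment:
image = {double(n) for n in range(1000)}
assert len(image) == 1000   # no two inputs collide (injective on the sample)
assert 1 not in image       # every odd number is missed: the image is proper
```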
If an infinite set is a well-orderable set, then it has many well-orderings which are non-isomorphic.
Infinite set theory involves proofs and definitions.[6] Important ideas discussed by Burton include how to define "elements" or parts of a set, how to define unique elements in the set, and how to prove infinity.[6] Burton also discusses proofs for different types of infinity, including countable and uncountable sets.[6] Topics used when comparing infinite and finite sets include ordered sets, cardinality, equivalency, coordinate planes, universal sets, mapping, subsets, continuity, and transcendence.[6] Cantor's set ideas were influenced by trigonometry and irrational numbers. Other key ideas in infinite set theory mentioned by Burton, Paula, Narli and Rodger include real numbers such as pi, integers, and Euler's number.[6][7][8]
Both Burton and Rogers use finite sets to start to explain infinite sets using proof concepts such as mapping, proof by induction, or proof by contradiction.[6][8] Mathematical trees can also be used to understand infinite sets.[9] Burton also discusses proofs of infinite sets including ideas such as unions and subsets.[6]
In Chapter 12 of The History of Mathematics: An Introduction, Burton emphasizes how mathematicians such as Zermelo, Dedekind, Galileo, Kronecker, Cantor, and Bolzano investigated and influenced infinite set theory.[6] He also considers potential historical influences, such as how Prussia's history in the 1800s resulted in an increase in scholarly mathematical knowledge, including Cantor's theory of infinite sets.[6]
Many of these mathematicians either debated the nature of infinity or otherwise added to the ideas of infinite sets.[6]
One potential application of infinite set theory is in genetics and biology.[10]
## Examples
### Countably infinite sets
The set of all integers, {..., -1, 0, 1, 2, ...} is a countably infinite set. The set of all even integers is also a countably infinite set, even though it is a proper subset of the integers.[4]
The set of all rational numbers is a countably infinite set as there is a bijection to the set of integers.[4]
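Countability becomes concrete once the enumeration is written down. Below is a small sketch of the standard back-and-forth bijection from the naturals onto all integers (the rationals admit a similar, though more involved, zig-zag enumeration):

```python
def nat_to_int(n):
    """Enumerates the integers as 0, 1, -1, 2, -2, ...: a bijection from
    the naturals onto the integers, which is exactly what 'countably
    infinite' means for the set of all integers."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

print([nat_to_int(n) for n in range(7)])  # [0, 1, -1, 2, -2, 3, -3]
```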
### Uncountably infinite sets
The set of all real numbers is an uncountably infinite set. The set of all irrational numbers is also an uncountably infinite set.[4] | 2023-02-01 07:51:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9366691708564758, "perplexity": 435.38705479056745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00092.warc.gz"} |
https://aviation.stackexchange.com/questions/87738/were-propeller-airplanes-significantly-more-scary-to-fly-in-compared-to-modern

# Were propeller airplanes significantly more "scary" to fly in compared to modern jet ones?
I've flown exactly once in my life. It was in 2004, with a commercial jet. Not a "jumbo", but a normal-sized jet plane in common use for cheap flights in that year. (Well, twice if you consider the trip back home as well, with a similar or identical plane.)
I found the experience quite scary, especially as it was going up and up, seemingly turning off the engines every now and then. The plane was "creaking" in a scary manner and it felt like the entire "main body" of the aircraft would snap in half at any moment, or that it would go too steeply upward and thus "turn around" and fall down on the ground or the sea.
Still, I wasn't hysterical and making a scene or anything. It just felt really scary in my stomach, perhaps understandably so since I had not grown up flying like most people seem to have. I was 18 when this experience happened.
I've many times watched old movies and documentaries and read about old airplanes, the "classic" kind which used propellers instead of the jet engines. I also assume that there must be multiple very different "generations" of jet-based airplanes as well, each one becoming more and more "comfortable" and "less scary" to travel in. Is this a correct assumption?
My question is: did old airplanes, for example ones in commercial use in the 1970s or 1950s, shake and creak more than modern ones, which would make them feel "scarier" to somebody like myself? I have this idea, which may or may not be accurate, that a jet engine is more "even" and that it perhaps has more "strength" to go through stormy weather and whatnot in a less "jerky" manner.
But on the other hand, older airplanes seemed to have way more room, less people and way better service, and not built with cost-cutting due to "ultra-cheap" tickets (I still find them expensive for my wallet), so maybe the overall experience was still more comfortable in those days?
I'm wondering about both the technical facts as well as the general perceived experience.
Old propeller airliners were much more prone to "bumpy" ride than modern jets, for one major reason: they flew lower.
When I was in my early teens (almost fifty years ago), I flew several times aboard a Beechcraft 99, a low-wing, twin turboprop feeder aircraft with (IIRC) 14 or 18 seats. Unpressurized, and most if not all seats have a pretty good forward view out the aircrew's windshield, as well as to the side over the (IIRC) right wing.
This aircraft had a cruise speed of somewhere around 200 kt, and was unpressurized, so flew below 12,000 feet (3500 m). On a sunny day, there would be a bump every time the aircraft passed through a thermal column; if there was weather, every gust was palpable. During takeoff and especially landing, the wing outboard of the engine flexed visibly, and the noise from the propellers was enough to make conversation difficult (not to mention anyone you might talk to was seated either ahead of you or behind).
I didn't find this scary, because I was already an aviation enthusiast; I found it fascinating and exciting.
But now, consider that the Beechcraft 99 was a significant step up from a slightly larger aircraft of a few decades earlier, the DC-3. It had more power, quieter engines, and flew faster (not to mention smelled better -- at least to me, jet fuel kerosene is much more pleasant than leaded gasoline). It climbed faster and had more reserve in case of a go-around or engine-out, as well. Yet, the DC-3 was, when introduced, one of the highest performing transport aircraft in the world.
Every generation of aircraft has been an evolutionary improvement over the previous -- because that's why it replaced the one before. Even the smallest jet airliners of today are much larger, more robust, faster, and because they fly higher, smoother. The wings still flex -- as has every wing ever built since the Wright brothers got an engine -- and when you're low and slow, you'll feel bumps from turbulence and thermals, but you'll feel them less than you would have fifty years ago.
• Isn't another reason that small planes are more susceptible to turbulence, and commercial planes of the pre-jet era were typically smaller? So the motor type is a red herring: A B747-sized propeller plane flying at 30,000 ft would fly as smoothly (but I think their ceiling is lower, so they cannot avoid weather systems by climbing higher like a jet). Jun 16 '21 at 7:02
• @Peter-ReinstateMonica The ability to pressurize large propeller planes so they could fly higher came at the very end of their reign -- the Super Constellation was pressurized and could cruise high enough to be effectively turbulence free (and had similar passenger load to a first-gen 707) -- but by the time they fixed the fatigue problem, the 707 was in the market and DC-8 was nearly ready to sell. As I said, it's mostly about altitude (and to a lesser extent wing loading, relative to turbulence -- higher loading makes the airplane respond less to a bump). Jun 16 '21 at 13:50
• Last paragraph is true for other vehicles as well. Ever try driving a car from the 1950's or earlier? By modern standards it's a pretty harrowing experience. Weak brakes, sluggish (non-power) steering, hardly any shocks, no seatbelt, airbags, or any other safety equipment, you'll feel every bump in the road, and be constantly afraid you're going to lose control of the vehicle. Technology improvements make lots of things less scary... Jun 16 '21 at 16:28
• @jamesqf There's always a trace of fuel odor in the engine exhaust, and tiny planes like the Beech 99 were boarded directly from the tarmac, with engines idling in most cases, so you'd get a little of that smell. Jun 16 '21 at 18:22
• @Darrel Hoffman: With any decent car, you are going to get a good bit of road feel. Some of us don't regard "handles like a waterbed" as an improvement :-) But I guess it's subjective: I had quite a lot of fun in some '50s cars. and didn't regard them as scary at all, just as I don't think my '60s Cherokee is scary. Jun 17 '21 at 5:36
Yes, if you were riding on a DC-7 or Lockheed Constellation, the planes that the first passenger jets replaced, you would be crawling up the walls if you found a modern jet frightening.
Lots of noise and vibration from 4 Wright R-3350 radial engines, that belch great clouds of oil smoke when they start (to me, all the noise and smoke is a symphony, but not to most people I'm sure).
It's noisiest near the front where the propellers are, so the 1st class section was usually at the quieter back of the cabin instead of the front. In-flight engine shutdowns were relatively common with the complex turbo-compound version of the R-3350.
Vibration transmitted into the cabin from the engines will make the interior panels rattle and buzz (still a problem on turboprops today actually).
Service ceilings are in the area of 25000 feet, so you have to go through or around weather, not over it, so you may be in turbulence and icing during the cruise phase, not just during the brief departure/climb and descent/approach.
This movie, shown in theatres to promote air travel and hosted by Arthur Godfrey (an early 50s kind of Oprah Winfrey type tv host) captures what it was like pretty well.
Of course, unless you were upper middle class or higher, you would be going by train or ship anyway, pre jet age. Air fares were too expensive for regular people.
• Ah, the wonderful Constellation. Experiencing in-flight engine shut-downs so often it was lovingly referred to by its pilots as "the best three engine airplane in the skies". Jun 18 '21 at 19:37
To enlarge upon John K's answer:
At night, the exhaust stacks (visible through the windows!) on the DC-7C glowed red-hot and the exhaust gas itself glowed blue and pink as it flowed back along the engine nacelle, flickering all the way to the trailing edge of the wing. This scared the heck out of me in 1960 while flying from LAX to Copenhagen via Winnipeg and Soendre Stromfjord.
In the daytime, the propeller de-icing systems would throw ice particles off the blades. Since the restrooms were positioned in the plane of the propellers (to minimize loss of life if a blade came loose and got thrown through the fuselage) and the restrooms had a little window in them, those ice particles would strike the window and produce little puffs of ice crystals that you could see.
If the props were even slightly out of sync, ripples of vibrations would rhythmically sweep through the passenger cabin and a cup of water placed on the fold-down tray table in front of you would "walk" by itself to the edge of the table under the influence of the vibrations and fall to the floor.
All these things made flying around the world in a piston-engined prop plane a lot scarier than the same flight in a jet.
And to top it off: because you flew lower and slower, the plane was constantly heaving and bucking up and down. After 23 hours of this, I suffered from severe mal-de-debarquement syndrome and for the following 10 years, every time I got into a confined space like a clothes closet the floor would start bucking and heaving and I would get nauseated!
A decently modern turboprop is comparable to a modern jet powered plane. Not much difference in the passenger experience, apart from the noisier engines.
However, a small propeller powered plane like a Cessna is a totally different beast. I have only flown on those a few times and they are like being on a bicycle in the air. A jet powered commercial airliner feels like a large luxury sedan in comparison.
Everything on those tiny planes is rattling & shaking. Safety precautions are minimal at best. I would imagine older smaller prop powered planes would be similar.
• The first time I flew on a small plane was in a co-worker's Mooney. A screw dropped into my lap during the flight. That wasn't very comforting. Jun 17 '21 at 0:37
• Great subjective description of the experience, but “Safety precautions are minimal at best.” isn’t so accurate — any plane produced by an established manufacturer in the last 50 years has plenty of well-designed and regulated safety features and oversight. Jun 17 '21 at 9:56
• @PeterLeFanuLumsdaine A very large percentage of light single GA aircraft flying today were designed more than 50 years ago. :) The 150, 172, 182, and PA-28 were all designed in the 1950s, for example. Granted, I'd still agree that "safety precautions are minimal at best" to be inaccurate, though it would definitely seem that way when comparing them to the \$100M-\$400M passenger jets most people are accustomed to flying in. Jun 18 '21 at 21:01
As a teen I flew commuter flights in the mid 70's on propeller aircraft. These were turboprop twins that seated a handful of people, although I don't recall the exact count, and I never knew the make and model. I didn't find them scary, and my sister never seemed afraid.
They were annoying, however. They were very loud. The props were never synchronized exactly, so you had to listen to the beat frequencies generated by the heterodyning of the props. I hate that.
I don't recall them being bumpier than the jets we also flew on when we were young, but it's possible that it was bumpier and just didn't bother us.
There was some compensation for the annoyance. The pilots never closed the curtain behind them, so you always had a great view of approach and landing. That was especially fun at night with all the runway and approach lighting.
Although this site usually discards questions dealing with subjective matters, I'm glad to see this question has "survived" the peer review. My two cents:
The thing with "ye olde aircraft" is that in their golden age they were the standard experience. Flying was supposed to be bumpy and noisy, and those flying frequently probably thought nothing of it. Throw a person from today back a few decades, and if not scared, they would be disappointed at least. The older planes would rattle and creak more, but so did other things in the daily lives of the people back then: cars, trains, ships, lawnmowers, buildings.
All in all, flying being scary has actually very little to do with what one flies on or with. Fear of flying as a psychological phenomenon is not rational. That's why it's referred to as flight phobia in scientific literature: a phobia is by definition an extreme or irrational fear of something, many definitions add a "disorder" to the description. I was unable to find quickly any studies about evolution of how people experience flying, but I would postulate that for a majority of people, it has not changed much over time.
It is worth mentioning that flying was more dangerous back then; the statistics are very clear about this. Since the seventies, the fatal accident rate in air travel has plummeted twelve-fold, from 6.35 to 0.51 fatal accidents per million flights (Wikipedia). So those embarking on a flight 50 years ago took a considerably larger risk than those taking a flight today.
More and more people can afford travel by air these days, that is a major factor when considering whether flying is more scary or not when comparing experiences over time. Planes are different, but so are the passengers as a group. Back in the days those flying were probably more adventurous, and flying carried a "high social stature". Might have been uncomfortable, but the bragging rights made up for it.
P.S. I personally never experienced older propeller aircraft (80's, so a bit outside the scope of this question) as any scarier than the modern jets. As a kid, I flew several times on a regional propeller aircraft about the size of the EMB 120 mentioned in a previous answer. I never found the ride to be scary, even though it was bumpier and noisier than what I now get to experience on contemporary and bigger jets. For me, it was exciting. A couple of years ago I was on a regional flight aboard an ATR-72, and the weather was bad. I mean really bad. Approach to home field was a real roller coaster ride with sharp bumps, long dips and high(ish) G soars. No doubt the cockpit was sprinkled with sweat after that. But was it scary? For me no, quite enjoyed it actually, as I trusted the plane and all the people involved in running the operation.
Unless something has gone seriously wrong, planes are only “scary” to those who don’t know what “normal” feels and sounds like.
I’ve flown in everything from jumbo jets to light GA trainers, and they all make numerous motions and sounds that will be completely unfamiliar to someone who’s never flown before. Our brains are wired at a deep level to be scared of the unknown, because running away from unknown dangers is a better long-term survival strategy than trusting them not to kill you. But you can’t run away on a plane, which just magnifies that fear.
As you fly more, you learn that despite these feelings and sounds, you arrive safely at your destination, so they become “known” and less scary. Your brain will even start anticipating them and take comfort in them because it means that things are going okay, and then it will start tuning them out. Eventually you will even start to wonder if something is wrong if you don’t experience those (formerly “scary”) motions or sounds when expected.
My high school band flew to a couple of national competitions, and many of the students had never flown before. They were paired with those of us who had, and I remember my seatmates waking me from my naps several times during each flight, panicked because they were certain we were going to die, and I had to reassure them that everything was fine before I could go back to sleep. Most were better on the second flight, and then a year or two later they’d be reassuring younger students themselves.
In the early 90s I had to fly on what I think was an Embraer EMB 120 Brasilia turboprop. It's not a terribly large aircraft.
We were heading to Nashville and there was a major front moving in. It bounced that plane around worse than any flight I have ever been on. My poor and beleaguered mother was now terribly motion sick and the Brasilia had rather noisy landing gear, so when they deployed for landing, she grabbed my arm and begged me to tell her it was the landing gear. The pilot had already announced we were landing so I assured her it was. We landed without incident in the rain.
I never had anything similar in the ATR-72 aircraft that became common afterward. They were slightly larger, but I also suspect the airlines didn't fly them as readily into adverse conditions like that.
By contrast, the worst flight I ever had on a jet was on a 757. There was near-constant rippling of the wings, but nothing compared to that Nashville flight. The plane would "bounce" some, and I had serious vertigo the next day, but nothing serious.
I flew a lot of regional flights between the UK and Ireland in the 1990s, this was before Ryanair became what they are now, and the aircraft were often Short Brothers 360, which is a 36-seat twin prop. They were nicknamed 'The Vomit Comet' and for good reason.
• Twin turboprops, so very noisy.
• Unpressurised cabin, so uncomfortable ear equalisation problems for some people. Also this means a low ceiling meaning it could be difficult to go above bad weather.
• Small, so tossed around very easily. As a passenger in one of these an approach into Knock Airport on the west coast of Ireland in the teeth of a North Atlantic winter storm is a very different proposition than it would be in even one of the smaller jets.
• Hello Alan, Shorts Brothers are the world's oldest Aircraft manufacturer. Jun 18 '21 at 16:08
https://math.stackexchange.com/questions/1172231/showing-the-function-z-is-analytic

showing the function |z| is analytic
So I need to show that f(z) = |z| is analytic. All I really have available to me are the Cauchy-Riemann equations. So with that being the case, I guess the assumption that the partials are continuous is present.
But how do I show analyticity for this function?
Following the advice from some earlier posts I saw, I should convert the function into u+iv form, but all that gives me is (u^2 + v^2)^1/2.
What if I treated the modulus as a positive and negative case?
• $|z|$ is NOT analytic. – Crostul Mar 2 '15 at 17:57
• Note that "$u + iv$ form" means splitting the value into real and imaginary parts. Here you have $u(x, y) = \sqrt{x^{2} + y^{2}}$ and $v(x, y) = 0$. – Andrew D. Hwang Mar 2 '15 at 19:25
• No complex function taking only real values is analytic, unless it is constant. – lhf Mar 2 '15 at 19:27
• I had a feeling it wasn't, I just was not sure if I should have treated the magnitude portion exclusively as the real portion of the complex value – dc3rd Mar 2 '15 at 20:51
$f(z) = |z|$ is not analytic. So you are being asked to prove something that is false.
A function $F(z)$ is analytic if $\dfrac{\partial F(z)}{\partial \bar{z}}=0.$ We have $f(z)=|z|=\sqrt{z \bar{z}}$. It is clear that $\dfrac{\partial f(z)}{\partial \bar{z}} \neq 0,$ so $|z|$ is not analytic.
Recall that
1. A complex function $f=u+iv:\Bbb C\to \Bbb C$ is analytic at a point $z_0=x_0+iy_0$ if there is a neighborhood $V=B(z_0,r)$ (say) of $z_0$ such that $f$ is differentiable (in the complex sense) at every point $z$ of $V$.
2. A sufficient condition for a complex function $f=u+iv:\Bbb C\to \Bbb C$ to be differentiable at $w=a+ib$ is that
($i$) all the partial derivatives $u_x,u_y,v_x$ and $v_y$ exist and are continuous at $(a,b)\in\Bbb R^2$
($ii$) the Cauchy-Riemann equations $u_x=v_y$ and $u_y=-v_x$ must hold at $(a,b)\in \Bbb R^2$
For your problem $f(z)=|z|$, so $u(x,y)=\sqrt{x^2+y^2}$ and $v(x,y)=0$. The Cauchy-Riemann equations (which are necessary for differentiability) would require $u_x=v_y=0$ and $u_y=-v_x=0$; but $u_x=x/\sqrt{x^2+y^2}$ and $u_y=y/\sqrt{x^2+y^2}$ cannot both vanish at any $z\neq 0$, and the partial derivatives of $u$ do not even exist at $(0,0)$. So $f$ is not analytic at any point of the complex plane.
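As a quick numerical illustration (added here, not part of the original answers), finite differences show the Cauchy-Riemann equations failing for u = sqrt(x^2 + y^2), v = 0 away from the origin:

```python
import math

def u(x, y):
    # real part of f(z) = |z|
    return math.sqrt(x * x + y * y)

# central finite differences for the partials of u at (1, 1)
h = 1e-6
ux = (u(1 + h, 1) - u(1 - h, 1)) / (2 * h)
uy = (u(1, 1 + h) - u(1, 1 - h)) / (2 * h)
vx = vy = 0.0  # v(x, y) = 0 identically

# Cauchy-Riemann would require ux == vy and uy == -vx;
# here ux = uy = 1/sqrt(2) != 0, so both equations fail at z = 1 + i.
print(ux, uy, vx, vy)
```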
Is the function $f(z)=|z|^2=z\bar{z}$ analytic?
Let $z=x+iy$. Then $f(z)=z\bar{z}=(x+iy)(x-iy)=x^2+y^2$.
If $f=u+iv$ (so $u=x^2+y^2$ and $v=0$), checking the Cauchy-Riemann equations:
$\dfrac{\partial u}{\partial x}=2x\neq 0=\dfrac{\partial v}{\partial y}$
$\dfrac{\partial u}{\partial y}=2y\neq 0=-\dfrac{\partial v}{\partial x}$
Then $f$ is not analytic
• oops: $|z| \neq z\bar{z}$ it is the square root of that. – Mark Fischler Mar 2 '15 at 19:10
http://mathhelpforum.com/math-topics/162721-vector-projections.html

1. Vector Projections
I'm given two vectors: F=9i+12j, V=3i+4j. I am asked to give the component of F parallel to V, the component of F perpendicular to V and the work done by force F through displacement V.
Any help?
Peter
2. Well you can express F as:
$\vec{F} = 3(3i + 4j)$
Meaning that F is parallel to V.
Find the magnitude of F (using Pythagoras' theorem); this will be the parallel component.
The perpendicular component will be 0 since the two vectors are parallel.
Work Done = $\vec{F} \cdot \vec{V}$ (the dot product; since the vectors are parallel here, this reduces to force magnitude times displacement magnitude).
Find the magnitude of the displacement V and you already have the force F, which is the parallel component of F to V.
3. Originally Posted by flybynight
I'm given two vectors: F=9i+12j, V=3i+4j. I am asked to give the component of F parallel to V, the component of F perpendicular to V and the work done by force F through displacement V.
$F_\parallel = \dfrac{F \cdot V}{V \cdot V}V \;\;\&\;\; F_\bot = F - F_\parallel$
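To check the numbers (an added illustration, not from the original thread), the projection formula above can be evaluated in a few lines of Python:

```python
# Worked check of the projection formulas for F = 9i + 12j, V = 3i + 4j.
F = (9.0, 12.0)
V = (3.0, 4.0)

def dot(a, b):
    """Dot product of two 2D vectors stored as tuples."""
    return sum(ai * bi for ai, bi in zip(a, b))

scale = dot(F, V) / dot(V, V)            # (F.V)/(V.V) = 75/25 = 3
F_par = tuple(scale * vi for vi in V)    # component of F parallel to V
F_perp = tuple(fi - pi for fi, pi in zip(F, F_par))  # perpendicular component
work = dot(F, V)                         # work done by F through displacement V

print(F_par, F_perp, work)               # (9.0, 12.0) (0.0, 0.0) 75.0
```

This confirms the answer above: F is parallel to V (F = 3V), so the perpendicular component is the zero vector and the work is F · V = 75.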
http://mathhelpforum.com/pre-calculus/205592-contradictory-derivative.html

1. ## Contradictory Derivative
Let's say f(x) = x^2.
This would give f'(x) = 2x.
Now we change the definition of f(x) = x^2 to f(x) = x + x + x... up to x times.
If we now differentiate it with respect to x, we will get f'(x) = d(x)/dx + d(x)/dx + d(x)/dx ....up to x times. And this sums up to x. Thus according to new definition of f(x), the derivative f'(x) would be x and not 2x.
Though we all know that there is some gap in calculating the derivative by the second method, what is the missing piece here? How can it be captured to cover every aspect?
2. ## Re: Contradictory Derivative
Hey vamosromil.
The subtle difference is that 1) this only applies if x is an integer and 2) such a summation assumes that we sum up a known amount of times (i.e. n*x where n is fixed).
If you add up d(x)/dx n times you get n which is what we expect.
The derivative of nx is n and does not depend on x at all.
You can't do what you did because the way you are doing it is ill-defined. If you are summing a fixed number of times, it is just (x + x + ... + x), n times, and not "x" times (what would "x times" mean if x is 1.123123798123978123 or pi)?
You have to be careful with how you specify things.
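One way to make this point precise (an editorial sketch, not from the thread) is to view the construction as a two-argument function and take a total derivative:

```latex
% Write the "sum of x copies of x" as a two-argument function,
% g(n, x) = n x, evaluated on the diagonal n = x:
f(x) = g(n, x)\big|_{n=x}, \qquad g(n, x) = n x .
% The total derivative must account for BOTH arguments varying:
\frac{df}{dx}
  = \frac{\partial g}{\partial n}\,\frac{dn}{dx}
  + \frac{\partial g}{\partial x}
  = x \cdot 1 + n \,\Big|_{n=x}
  = 2x .
% Differentiating "term by term" holds the number of summands n
% fixed, so it only captures \partial g/\partial x = n = x and
% silently drops the first term.
```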
3. ## Re: Contradictory Derivative
Originally Posted by vamosromil
Now we change the definition of f(x) = x^2 to f(x) = x + x + x... up to x times.
So, $f(\sqrt{2})=\sqrt{2}+\sqrt{2}+\ldots +\sqrt{2}$ up to $\sqrt{2}$ times?
Edited: Sorry, I didn't see chiro's post.
4. ## Re: Contradictory Derivative
I wonder how you can write down 2.435 + 2.435 + ... exactly 2.435 times?
5. ## Re: Contradictory Derivative
Hello everyone, I see there are some typical questions being asked. I posted this problem on another forum too, and the objection about writing down a non-integer number of summands has also been addressed there. But the core explanation is yet to be found. Here is the link for that forum: Contradictory Derivative
6. ## Re: Contradictory Derivative
Originally Posted by vamosromil
But the core explanation is yet to be found.. here is the link for that forum: Contradictory Derivative
https://andrewpwheeler.com/tag/scatterplot/

Making smoothed scatterplots in python
The other day I made a blog post on my notes on making scatterplots in matplotlib. One big chunk of why you want to make scatterplots though is if you are interested in a predictive relationship. Typically you want to look at the conditional value of the Y variable based on the X variable. Here are some example exploratory data analysis plots to accomplish that task in python.
I have posted the code to follow along on github here, in particular smooth.py has the functions of interest, and below I have various examples (that are saved in the Examples_Conditional.py file).
Data Prep
First to get started, I am importing my libraries and loading up some of the data from my dissertation on crime in DC at street units. My functions are in the smooth set of code. Also I change the default matplotlib theme using smooth.change_theme(). Only difference from my prior posts is I don’t have gridlines by default here (they can be a bit busy).
#################################
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import os
import sys
mydir = r'D:\Dropbox\Dropbox\PublicCode_Git\Blog_Code\Python\Smooth'
data_loc = r'https://dl.dropbox.com/s/79ma3ldoup1bkw6/DC_CrimeData.csv?dl=0'
os.chdir(mydir)
#My functions
sys.path.append(mydir)
import smooth
smooth.change_theme()
#Dissertation dataset, can read from dropbox
DC_crime = pd.read_csv(data_loc)
#################################
Binned Conditional Plots
The first set of examples, I bin the data and estimate the conditional means and standard deviations. So here in this example I estimate E[Y | X = 0], E[Y | X = 1], etc, where Y is the total number of part 1 crimes and x is the total number of alcohol licenses on the street unit (e.g. bars, liquor stores, or conv. stores that sell beer).
The function name is mean_spike, and you pass in at a minimum the dataframe, x variable, and y variable. I by default plot the spikes as +/- 2 standard deviations, but you can set it via the mult argument.
####################
#Example binning and making mean/std dev spike plots
smooth.mean_spike(DC_crime,'TotalLic','TotalCrime')
mean_lic = smooth.mean_spike(DC_crime,'TotalLic','TotalCrime',
plot=False,ret_data=True)
####################
This example works out because licenses are just whole numbers, so they can be binned. You can pass in any X variable that can be binned in the end. So you could pass in a string for the X variable. If you don't like the resulting format of the plot though, you can just pass plot=False,ret_data=True for arguments, and you get the aggregated data that I use to build the plots in the end.
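For intuition, the aggregation behind these spike plots can be sketched in a few lines of pure Python (an approximation; the internals of smooth.mean_spike may differ):

```python
from collections import defaultdict
import math

def mean_spike_data(pairs, mult=2.0):
    """Aggregate (x, y) pairs into binned conditional means, roughly
    what smooth.mean_spike plots (a sketch -- the actual function's
    internals may differ). Returns {x: (mean, low, high)}, where the
    spike endpoints are mean -/+ mult * std dev. Bins with a single
    observation are dropped, as noted in the post."""
    bins = defaultdict(list)
    for x, y in pairs:
        bins[x].append(y)
    out = {}
    for x, ys in sorted(bins.items()):
        n = len(ys)
        if n < 2:
            continue  # clip out bins with only one observation
        m = sum(ys) / n
        sd = math.sqrt(sum((y - m) ** 2 for y in ys) / (n - 1))
        out[x] = (m, m - mult * sd, m + mult * sd)
    return out
```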
Another example I am frequently interested in is proportions and confidence intervals. Here it uses exact binomial confidence intervals at the 99% confidence level. Here I clip the burglary data to 0/1 values and then estimate proportions.
####################
#Example with proportion confidence interval spike plots
DC_crime['BurgClip'] = DC_crime['OffN3'].clip(0,1)
smooth.prop_spike(DC_crime,'TotalLic','BurgClip')
####################
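For intuition about those intervals, here is a Wilson score interval at the 99% level in pure Python. Note this is a swapped-in approximation: the post's prop_spike uses exact binomial intervals, which require a beta quantile function.

```python
import math

def wilson_ci(successes, n, z=2.576):
    """Wilson score interval for a proportion (z = 2.576 ~ 99% level).
    A close approximation used here to keep the sketch dependency-free;
    the blog's prop_spike uses EXACT binomial intervals instead."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(50, 100)  # interval is symmetric around 0.5 here
```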
A few things to note about this is I clip out bins with only 1 observation in them for both of these plots. I also do not have an argument to save the plot. This is because I typically only use these for exploratory data analysis, it is pretty rare I use these plots in a final presentation or paper.
I will need to update these in the future to jitter the data slightly to be able to superimpose the original data observations. The next plots are a bit easier to show that though.
Restricted Cubic Spline Plots
Binning like I did prior works out well when you have only a few bins of data. If you have continuous inputs though it is tougher. In that case, typically what I want to do is estimate a functional relationship in a regression equation, e.g. Y ~ f(x), where f(x) is pretty flexible to identify potential non-linear relationships.
Many analysts are taught the loess linear smoother for this. But I do not like loess very much, it is often both locally too wiggly and globally too smooth in my experience, and the weighting function has no really good default.
Another popular choice is to use generalized additive model smoothers. My experience with these (in R) is better than loess, but they IMO tend to be too aggressive, and identify overly complicated functions by default.
My favorite approach comes from Frank Harrell’s regression modeling strategies: just pick a regular set of restricted cubic spline knots along your data. It is arbitrary where to set the knot locations for the splines, but my experience is they are very robust (changing the knot locations only tends to change the estimated functional form by a tiny bit).
I have class notes on restricted cubic splines I think are a nice introduction. First, I am going to make the same dataset from my class notes, the US violent crime rate from 85 through 2010.
years = pd.Series(list(range(26)))
vcr = [1881.3,
1995.2,
2036.1,
2217.6,
2299.9,
2383.6,
2318.2,
2163.7,
2089.8,
1860.9,
1557.8,
1344.2,
1268.4,
1167.4,
1062.6,
945.2,
927.5,
789.6,
734.1,
687.4,
673.1,
637.9,
613.8,
580.3,
551.8,
593.1]
yr_df = pd.DataFrame(zip(years,years+1985,vcr), columns=['y1','years','vcr'])
I have a function that allows you to append the spline basis to a dataframe. If you don’t pass in a data argument, it returns a dataframe of the basis functions.
#Can append rcs basis to dataframe
kn = [3.0,7.0,12.0,21.0]
smooth.rcs(years,knots=kn,stub='S',data=yr_df)
I also have in the code Harrell’s suggested knot locations for the data. This ranges from 3 to 7 knots (it will throw an error if you pass a number not in that range). For this data, 4 knots suggests the locations [1.25, 8.75, 16.25, 23.75].
#If you want to use Harrell's rules to suggest knot locations
smooth.sug_knots(years,4)
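Harrell’s rule amounts to placing knots at fixed quantiles of the X data; for 4 knots the quantiles are (.05, .35, .65, .95), which reproduces the suggested locations above. This is my sketch of the rule, and sug_knots may handle edge cases differently:

```python
import numpy as np

years = np.arange(26)  # 0..25, the y1 variable above
# Harrell's quantiles for 4 knots
knots = np.quantile(years, [0.05, 0.35, 0.65, 0.95])
# → array([ 1.25,  8.75, 16.25, 23.75])
```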
Note that if you have integer data these rules don’t work out so well (they can produce redundant suggested knot locations), so Harrell’s defaults don’t work with my alcohol license data. But that is one of the reasons I like these: I just pick regular locations along the X data and they tend to work well. So here is a regression plot passing in those knot locations kn = [3.0,7.0,12.0,21.0] I defined a few paragraphs ago, and the plot draws a few vertical guides to show the knot locations.
#RCS plot
smooth.plot_rcs(yr_df,'y1','vcr',knots=kn)
Note that the error bands in the plot are confidence intervals around the mean, not prediction intervals. One of the nice things about this is that under the hood I used the statsmodels GLM interface, so if you want to change the underlying link function to Poisson (I am going back to my DC crime data here), you just pass it in the fam argument:
#Can pass in a family argument for logit/Poisson models
smooth.plot_rcs(DC_crime,'TotalLic','TotalCrime', knots=[3,7,10,15],
fam=sm.families.Poisson(), marker_size=12)
This is a really great example for the utility of splines. I will show later, but a linear Poisson model for the alcohol license effect extrapolates very poorly and ends up being explosive. Here though, at the larger values the conditional effect fits right into the observed data. (And I swear I did not fiddle with the knot locations, they are just what I picked out offhand to spread them out on the X axis.)
And if you want to do a logistic regression:
smooth.plot_rcs(DC_crime,'TotalLic','BurgClip', knots=[3,7,10,15],
fam=sm.families.Binomial(),marker_alpha=0)
I’m not sure how to do this in a way you can get prediction intervals (I know how to do it for Gaussian models, but not for the other glm families, prediction intervals probably don’t make sense for binomial data anyway). But one thing I could expand on in the future is to do quantile regression instead of glm models.
Smooth Plots by Group
Sometimes you want to do the smoothed regression plots with interactions per groups. I have two helper functions to do this. One is group_rcs_plot. Here I use the good old iris data to illustrate, which I will explain why in a second.
#Superimposing rcs on the same plot
smooth.group_rcs_plot(iris,'sepal_length','sepal_width',
'species',colors=None,num_knots=3)
If you pass in the num_knots argument, the knot locations are different for each subgroup of data (which I like as a default). If you instead pass in the knots argument with explicit locations, they are the same for each subgroup.
Note that the way I estimate the models here, I fit three different models on the subsetted data frames; I do not estimate a stacked model with group interactions. So the error bands will be a bit wider than for the stacked model.
Sometimes superimposing many different groups is tough to visualize. So then a good option is to make a set of small multiple plots. To help with this, I’ve made a function loc_error, to pipe into seaborn’s small multiple set up:
#Small multiple example
g = sns.FacetGrid(iris, col='species',col_wrap=2)
g.map_dataframe(smooth.loc_error, x='sepal_length', y='sepal_width', num_knots=3)
g.set_axis_labels("Sepal Length", "Sepal Width")
And here you can see that the knot locations are different for each subset, and this plot by default includes the original observations.
Using the Formula Interface for Plots
Finally, I’ve been experimenting a bit with using the input in a formula interface, more similar to the way ggplot in R allows you to do this. So this is a new function, plot_form, and here is an example Poisson linear model:
smooth.plot_form(data=DC_crime,x='TotalLic',y='TotalCrime',
form='TotalCrime ~ TotalLic',
fam=sm.families.Poisson(), marker_size=12)
You can see the explosive effect I talked about, which is common for Poisson/negative binomial models.
Here with the formula interface you can do other things, such as a polynomial regression:
#Can do polynomial terms
#(note patsy needs I() for arithmetic powers; a bare ** is the interaction operator)
smooth.plot_form(data=DC_crime,x='TotalLic',y='TotalCrime',
form='TotalCrime ~ TotalLic + I(TotalLic**2) + I(TotalLic**3)',
fam=sm.families.Poisson(), marker_size=12)
Which here ends up being almost indistinguishable from the linear terms. You can do other smoothers that are available in the patsy library as well, here are bsplines:
#Can do other smoothers
smooth.plot_form(data=DC_crime,x='TotalLic',y='TotalCrime',
form='TotalCrime ~ bs(TotalLic,df=4,degree=3)',
fam=sm.families.Poisson(), marker_size=12)
I don’t really have a good reason to prefer restricted cubic splines to bsplines, I am just more familiar with restricted cubic splines (and this plot does not illustrate the knot locations that were by default chosen, although you could pass in knot locations to the bs function).
You can also do other transformations of the x variable. So here, taking the square root of the total number of licenses helps with the explosive effect somewhat:
#Can do transforms of the X variable
smooth.plot_form(data=DC_crime,x='TotalLic',y='TotalCrime',
form='TotalCrime ~ np.sqrt(TotalLic)',
fam=sm.families.Poisson(), marker_size=12)
In the prior blog post about explosive Poisson models I also showed a broken-stick type model, for when you want to log the x variable but it has zero values.
#Can do multiple transforms of the X variable
smooth.plot_form(data=DC_crime,x='TotalLic',y='TotalCrime',
form='TotalCrime ~ np.log(TotalLic.clip(1)) + I(TotalLic==0)',
fam=sm.families.Poisson(), marker_size=12)
Technically this “works” if you transform the Y variable as well, but the resulting plot is misleading, and the prediction interval is for the transformed variable. E.g. if you pass a formula 'np.log(TotalCrime+1) ~ TotalLic', you would need to exponentiate the predictions and subtract 1 to get back to the original scale (and then the line won’t be the mean anymore, but the confidence intervals are OK).
I will need to see if I can figure out patsy and sympy to be able to do the inverse transformation to even do that. That type of transform to the y variable directly probably only makes sense for linear models, and then I would also maybe need to do a Duan type smearing estimate to get the mean effect right.
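To make the back-transform issue concrete, undoing a log(Y+1) model by hand looks like this. A sketch with made-up prediction values; as noted, the back-transformed line is no longer the conditional mean:

```python
import numpy as np

log_pred = np.array([0.0, 1.0, 2.0])          # predictions on the log(Y+1) scale
ci_low, ci_high = log_pred - 0.5, log_pred + 0.5

# back-transform the point predictions and the interval endpoints
pred = np.exp(log_pred) - 1                   # no longer the conditional mean of Y
low, high = np.exp(ci_low) - 1, np.exp(ci_high) - 1
```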
Notes on making scatterplots in matplotlib and seaborn
Many of my programming tips, like my notes for making Leaflet maps in R or margins plots in Stata, I’ve just accumulated doing projects over the years. My current workplace is a python shop though, so I am figuring some of these things out all over again in python. I made some ugly scatterplots for a presentation the other day, and figured it was time to spend a little while making some notes on making them a bit nicer.
I have some prior python graphing post examples as well.
For this post, I am going to use the same data I illustrated with SPSS previously, a set of crime rates in Appalachian counties. Here you can download the dataset and the python script to follow along.
Making scatterplots using matplotlib
So first for the upfront junk: I load my libraries, change my directory, update my plot theme, and then load my data into a dataframe crime_dat. I technically do not use numpy in this script, but as soon as I take it out I’m guaranteed to need np. for something!
################################################################
import pandas as pd
import numpy as np
import os
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
my_dir = r'C:\Users\andre\OneDrive\Desktop\big_scatter'
os.chdir(my_dir)
andy_theme = {'axes.grid': True,
'grid.linestyle': '--',
'legend.framealpha': 1,
'legend.facecolor': 'white',
'legend.fontsize': 14,
'legend.title_fontsize': 16,
'xtick.labelsize': 14,
'ytick.labelsize': 14,
'axes.labelsize': 16,
'axes.titlesize': 20,
'figure.dpi': 100}
matplotlib.rcParams.update(andy_theme)
################################################################
First, let’s start from the base scatterplot. After defining my figure and axis objects, I add on the ax.scatter by pointing the x and y’s to my pandas dataframe columns, here Burglary and Robbery rates per 100k. You could instead start from the pandas dataframe plotting methods (as I did in my prior histogram post); I don’t have a good reason to prefer one or the other.
Then I set the axis grid lines to be below my points (is there a way to set this as a default?), and then I set my X and Y axis labels to be nicer than the default names.
################################################################
#Default scatterplot
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(crime_dat['burg_rate'], crime_dat['rob_rate'])
ax.set_axisbelow(True)
ax.set_xlabel('Burglary Rate per 100,000')
ax.set_ylabel('Robbery Rate per 100,000')
plt.savefig('Scatter01.png', dpi=500, bbox_inches='tight')
plt.show()
################################################################
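On the parenthetical question about grid lines: matplotlib does have an rcParam for this, so to my knowledge you can put it in the theme dictionary instead of calling set_axisbelow on every plot:

```python
import matplotlib

# same effect as calling ax.set_axisbelow(True) on each axis;
# could be added to the andy_theme dictionary at the top of the script
matplotlib.rcParams['axes.axisbelow'] = True
```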
You can see here the default point markers, just solid blue filled circles with no outline; when you get a very dense scatterplot it just looks like a solid blob. I think a better default for scatterplots is to plot points with an outline. Here I also make the interior fill slightly transparent. All of this action is going on in the ax.scatter call; all of the other lines are the same.
################################################################
#Making points have an outline and interior fill
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(crime_dat['burg_rate'], crime_dat['rob_rate'],
c='grey', edgecolor='k', alpha=0.5)
ax.set_axisbelow(True)
ax.set_xlabel('Burglary Rate per 100,000')
ax.set_ylabel('Robbery Rate per 100,000')
plt.savefig('Scatter02.png', dpi=500, bbox_inches='tight')
plt.show()
################################################################
So that is better, but we still have quite a bit of overplotting going on. Another quick trick is to make the points smaller and up the transparency by setting alpha to a lower value. This allows you to further visualize the density, but then makes it a bit harder to see individual points – if you started from here you might miss that outlier in the upper right.
Note I don’t set the edgecolor here, but if you want to make the edges semi-transparent as well you could do edgecolor=(0.0, 0.0, 0.0, 0.5), where the last number is the alpha transparency value.
################################################################
#Making the points small and semi-transparent
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(crime_dat['burg_rate'], crime_dat['rob_rate'], c='k',
alpha=0.1, s=4)
ax.set_axisbelow(True)
ax.set_xlabel('Burglary Rate per 100,000')
ax.set_ylabel('Robbery Rate per 100,000')
plt.savefig('Scatter03.png', dpi=500, bbox_inches='tight')
plt.show()
################################################################
This dataset has around 7.5k rows in it. For most datasets with any more than a hundred points, you often have severe overplotting like you do here. One way to solve that problem is to bin observations and then make a graph showing the counts within the bins. Matplotlib has a very nice hexbin method for doing this, which is easier to show than explain.
################################################################
#Making a hexbin plot
fig, ax = plt.subplots(figsize=(6,4))
hb = ax.hexbin(crime_dat['burg_rate'], crime_dat['rob_rate'],
gridsize=20, edgecolors='grey',
cmap='inferno', mincnt=1)
ax.set_axisbelow(True)
ax.set_xlabel('Burglary Rate per 100,000')
ax.set_ylabel('Robbery Rate per 100,000')
cb = fig.colorbar(hb, ax=ax)
plt.savefig('Scatter04.png', dpi=500, bbox_inches='tight')
plt.show()
################################################################
So for the hexbins I like using the mincnt=1 option, as it clearly shows areas with no points, but then you can still spot the outliers fairly easy. (Using white for the edge colors looks nice as well.)
You may be asking, what is up with that outlier in the top right? It ends up being Letcher county in Kentucky in 1983, which had a UCR population estimate of only 1522, but had a total of 136 burglaries and 7 robberies. This could technically be correct (only some local one cop town reported, and that town does not cover the whole county), but I’m wondering if this is a UCR reporting snafu.
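Double-checking that outlier with the rate arithmetic (counts per population, times 100,000):

```python
def rate_per_100k(count, population):
    return count / population * 100_000

# Letcher county, KY, 1983 (UCR population estimate of 1522)
burg_rate = rate_per_100k(136, 1522)  # ≈ 8936 burglaries per 100k
rob_rate = rate_per_100k(7, 1522)     # ≈ 460 robberies per 100k
```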
It is also a good use case for funnel charts. I debated on making some notes here about putting in text labels, but will hold off for now. You can add in text by using ax.annotate fairly easily by hand, but it is hard to automate text label positions. It is maybe easier to make interactive graphs and have a tooltip, but that will need to be another blog post as well.
Making scatterplots using seaborn
The further examples I show are using the seaborn library, imported earlier as sns. I like using seaborn to make small multiple plots, but it also has a very nice 2d kernel density contour plot method I am showing off.
Note this does something fundamentally different than the prior hexbin chart: it creates a density estimate. Here it looks pretty, but it creates a density estimate in areas that are not possible (negative crime rates). (There are ways to prevent this, such as estimating the KDE on a transformed scale and re-transforming back; reflecting the density back inside the plot would probably make more sense here, a la edge weighting in spatial statistics.)
Here the only other things to note are that I used filled contours instead of just the lines, and I drop the lowest shaded area (I wish I could just drop areas of zero density; note that dropping the lowest area drops my outlier in the top right). Also I had a tough go with the default bandwidth estimators, so I input my own.
################################################################
#Making a contour plot using seaborn
g = sns.kdeplot(crime_dat['burg_rate'], crime_dat['rob_rate'],
                shade=True, shade_lowest=False, bw=70) #bw value illustrative
g.set_axisbelow(True)
g.set_xlabel('Burglary Rate per 100,000')
g.set_ylabel('Robbery Rate per 100,000')
plt.savefig('Scatter05.png', dpi=500, bbox_inches='tight')
plt.show()
################################################################
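The reflection idea mentioned earlier can be sketched with scipy: mirror the sample about zero, fit the KDE to the mirrored data, and fold the density back onto the non-negative half. A sketch on simulated non-negative data; this is not what seaborn does by default:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)     # non-negative data, like crime rates

# fit the KDE to the data mirrored about zero
kde = gaussian_kde(np.concatenate([x, -x]))

def reflected_pdf(grid):
    """Fold the mirrored density back: zero below 0, kde(t) + kde(-t) above."""
    grid = np.asarray(grid, dtype=float)
    return np.where(grid >= 0, kde(grid) + kde(-grid), 0.0)
```

The folded estimate integrates to one over the non-negative half line, so no density leaks into impossible negative values.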
So far I have not talked about the actual marker types. It is very difficult to visualize different markers in a scatterplot unless they are clearly separated. So although it works out OK for the Iris dataset because it is small N and the species are clearly separated, in real life datasets it tends to be much messier.
So I very rarely use multiple point types to symbolize different groups in a scatterplot, but prefer to use small multiple graphs. Here is an example of turning my original scatterplot, but differentiating between different county areas in the dataset. It is a pretty straightforward update using sns.FacetGrid to define the group, and then using g.map. (There is probably a smarter way to set the grid lines below the points for each subplot than the loop.)
################################################################
#Making a small multiple scatterplot using seaborn
g = sns.FacetGrid(data=crime_dat, col='subrgn',
col_wrap=2, despine=False, height=4)
g.map(plt.scatter, 'burg_rate', 'rob_rate', color='grey',
s=12, edgecolor='k', alpha=0.5)
g.set_titles("{col_name}")
for a in g.axes:
a.set_axisbelow(True)
g.set_xlabels('Burglary Rate per 100,000')
g.set_ylabels('Robbery Rate per 100,000')
plt.savefig('Scatter06.png', dpi=500, bbox_inches='tight')
plt.show()
################################################################
And then finally I show an example of making a small multiple hexbin plot. It is a little tricky, but this is an example in the seaborn docs of writing your own sub-plot function and passing that.
To make this work, you need to pass an extent for each subplot (so the hexagons are not expanded/shrunk in any particular subplot). You also need to pass a vmin/vmax argument, so the color scales are consistent for each subplot. Then finally to add in the color bar I just fiddled with adding an axes. (Again there is probably a smarter way to scoop up the plot coordinates for the last plot, but here I just experimented till it looked about right.)
################################################################
#Making a small multiple hexbin plot using seaborn
#https://stackoverflow.com/a/31385996/604456
def loc_hexbin(x, y, **kwargs):
kwargs.pop("color", None)
plt.hexbin(x, y, gridsize=20, edgecolor='grey',
cmap='inferno', mincnt=1,
vmin=1, vmax=700, **kwargs)
g = sns.FacetGrid(data=crime_dat, col='subrgn',
col_wrap=2, despine=False, height=4)
g.map(loc_hexbin, 'burg_rate', 'rob_rate',
edgecolors='grey', extent=[0, 9000, 0, 500])
g.set_titles("{col_name}")
for a in g.axes:
a.set_axisbelow(True)
#This goes x,y,width,height
cax = g.fig.add_axes([0.55, 0.09, 0.03, .384])
plt.colorbar(cax=cax, ax=g.axes[0])
g.set_xlabels('Burglary Rate per 100,000')
g.set_ylabels('Robbery Rate per 100,000')
plt.savefig('Scatter07.png', dpi=500, bbox_inches='tight')
plt.show()
################################################################
Another common task with scatterplots is to visualize a smoother, e.g. E[Y|X], the expected mean of Y conditional on X (or any other quantile, etc.). That will have to be another post though; for examples I have written about previously, see jittering 0/1 data and visually weighted regression.
Jittered scatterplots with 0-1 data
Scatterplots with discrete variables and many observations take some touches beyond the defaults to make them useful. Consider the case of a categorical outcome that can only take two values, 0 and 1. What happens when we plot this data against a continuous covariate with my default chart template in SPSS?
Oh boy, that is not helpful. Here is the fake data I made and the GGRAPH code to make said chart.
*Inverse logit - see.
*https://andrewpwheeler.wordpress.com/2013/06/25/an-example-of-using-a-macro-to-make-a-custom-data-transformation-function-in-spss/.
DEFINE !INVLOGIT (!POSITIONAL !ENCLOSE("(",")") )
1/(1 + EXP(-!1))
!ENDDEFINE.
SET SEED 5.
INPUT PROGRAM.
LOOP #i = 1 TO 1000.
COMPUTE X = RV.UNIFORM(0,1).
DO IF X <= 0.2.
COMPUTE YLin = -0.5 + 0.3*(X-0.1) - 4*((X-0.1)**2).
ELSE IF X > 0.2 AND X < 0.8.
COMPUTE YLin = 0 - 0.2*(X-0.5) + 2*((X-0.5)**2) - 4*((X-0.5)**3).
ELSE.
COMPUTE YLin = 3 + 3*(X - 0.9).
END IF.
COMPUTE #YLin = !INVLOGIT(YLin).
COMPUTE Y = RV.BERNOULLI(#YLin).
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
DATASET NAME NonLinLogit.
FORMATS Y (F1.0) X (F2.1).
*Original chart.
GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=X Y
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
DATA: X=col(source(s), name("X"))
DATA: Y=col(source(s), name("Y"))
GUIDE: axis(dim(1), label("X"))
GUIDE: axis(dim(2), label("Y"))
ELEMENT: point(position(X*Y))
END GPL.
So here we will do a few things to the chart to make it easier to interpret:
SPSS can jitter the points directly within GGRAPH code (see point.jitter), but here I jitter the data myself by a small uniform amount. The extra aesthetic options for making points smaller and semi-transparent are at the end of the ELEMENT statement.
*Making a jittered chart.
COMPUTE YJitt = RV.UNIFORM(-0.04,0.04) + Y.
FORMATS Y YJitt (F1.0).
GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=X Y YJitt
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
DATA: X=col(source(s), name("X"))
DATA: Y=col(source(s), name("Y"))
DATA: YJitt=col(source(s), name("YJitt"))
GUIDE: axis(dim(1), label("X"))
GUIDE: axis(dim(2), label("Y"), delta(1), start(0))
SCALE: linear(dim(2), min(-0.05), max(1.05))
ELEMENT: point(position(X*YJitt), size(size."3"),
transparency.exterior(transparency."0.7"))
END GPL.
If I made the Y axis categorical I would need to use point.jitter in the inline GPL code because SPSS will always force the categories to the same spot on the axis. But since I draw the Y axis as continuous here I can do the jittering myself.
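The same jitter is easy to do outside of SPSS as well; a numpy sketch, with a numpy generator standing in for RV.UNIFORM:

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.binomial(1, 0.3, size=1000)               # 0/1 outcome
y_jitt = y + rng.uniform(-0.04, 0.04, size=1000)  # same idea as COMPUTE YJitt
```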
A useful tool for exploratory data analysis is to add a smoothing term to the plot – a local estimate of the mean at different locations of the X axis. No binning necessary; here is an example using loess right within the GGRAPH call. The red line is the smoother, and the blue line is the actual proportion I generated the fake data from. It does a pretty good job of identifying the discontinuity at 0.8, but the change points earlier are not visible. Loess was originally meant for continuous data, but for exploratory analysis it works just fine on the 0-1 data here. See also smooth.mean for 0-1 data.
*Now adding in a smoother term.
COMPUTE ActualFunct = !INVLOGIT(YLin).
FORMATS Y YJitt ActualFunct (F2.1).
GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=X Y YJitt ActualFunct
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
DATA: X=col(source(s), name("X"))
DATA: Y=col(source(s), name("Y"))
DATA: YJitt=col(source(s), name("YJitt"))
DATA: ActualFunct=col(source(s), name("ActualFunct"))
GUIDE: axis(dim(1), label("X"))
GUIDE: axis(dim(2), label("Y"), delta(0.2), start(0))
SCALE: linear(dim(2), min(-0.05), max(1.05))
ELEMENT: point(position(X*YJitt), size(size."3"),
transparency.exterior(transparency."0.7"))
ELEMENT: line(position(smooth.loess(X*Y, proportion(0.2))), color(color.red))
ELEMENT: line(position(X*ActualFunct), color(color.blue))
END GPL.
SPSS’s default smoothing is a little too smooth for my taste, so I set the proportion of the X variable to use in estimating the mean within the position statement.
I wish SPSS had the ability to draw error bars around the smoothed means (you can draw them around the linear regression lines with quadratic or cubic polynomial terms, but not around the local estimates like smooth.loess or smooth.mean). I realize they are not well defined and rarely have the coverage properties of typical regression estimators – but I would rather have some idea about the error than no idea. Here is an example using the ggplot2 library in R. Of course we can work the magic right within SPSS.
BEGIN PROGRAM R.
#Grab Data
casedata <- spssdata.GetDataFromSPSS(variables=c("Y","X"))
#ggplot smoothed version
library(ggplot2)
library(splines)
MyPlot <- ggplot(aes(x = X, y = Y), data = casedata) +
geom_jitter(position = position_jitter(height = .04, width = 0), alpha = 0.1, size = 2) +
stat_smooth(method="glm", family="binomial", formula = y ~ ns(x,5))
MyPlot
END PROGRAM.
To accomplish the same thing in SPSS you can estimate restricted cubic splines and then use any applicable regression procedure (e.g. LOGISTIC, GENLIN) and save the predicted values and confidence intervals. It is pretty easy to call the R code though!
I haven’t explored the automatic linear modelling, so let me know in the comments if there is a simple way right in SPSS to explore such non-linear predictions.
https://www.numerade.com/questions/naep-scores-young-people-have-a-better-chance-of-full-time-employment-and-good-wages-if-they-are-goo/
# NAEP scores

Young people have a better chance of full-time employment and good wages if they are good with numbers. How strong are the quantitative skills of young Americans of working age? One source of data is the National Assessment of Educational Progress (NAEP) Young Adult Literacy Assessment Survey, which is based on a nationwide probability sample of households. The NAEP survey includes a short test of quantitative skills, covering mainly basic arithmetic and the ability to apply it to realistic problems. Scores on the test range from 0 to 500. For example, a person who scores 233 can add the amounts of two checks appearing on a bank deposit slip; someone scoring 325 can determine the price of a meal from a menu; a person scoring 375 can transform a price in cents per ounce into dollars per pound. $^{4}$ Suppose that you give the NAEP test to an SRS of 840 people from a large population in which the scores have mean 280 and standard deviation $\sigma = 60$. The mean $\overline{x}$ of the 840 scores will vary if you take repeated samples.
(a) Describe the shape, center, and spread of the sampling distribution of $\overline{x}$.
(b) Sketch the sampling distribution of $\overline{x}$. Mark its mean and the values one, two, and three standard deviations on either side of the mean.
(c) According to the $68-95-99.7$ rule, about 95$\%$ of all values of $\overline{x}$ lie within a distance $m$ of the mean of the sampling distribution. What is $m$? Shade the region on the axis of your sketch that is within $m$ of the mean.
(d) Whenever $\overline{x}$ falls in the region you shaded, the population mean $\mu$ lies in the confidence interval $\overline{x} \pm m$. For what percent of all possible samples does the interval capture $\mu$?
## Answer
a. Approximately normal with mean 280 and standard deviation 2.0702
b. See drawing
c. $m = 4.140$
d. 95$\%$
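Checking the numeric parts of the answer (the standard deviation of $\overline{x}$ and the 95% margin from the 68-95-99.7 rule):

```python
import math

n, mu, sigma = 840, 280, 60
se = sigma / math.sqrt(n)  # standard deviation of the sampling distribution of x-bar
m = 2 * se                 # 95% margin via the 68-95-99.7 rule
# se ≈ 2.0702, m ≈ 4.140
```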
https://www.nengo.ai/nengo-loihi/v1.0.0/tips.html | # Tips and tricks¶
## Making models fit on Loihi¶
### Splitting large Ensembles¶
By default, NengoLoihi will split Ensemble objects that are too large to fit on a single Loihi core into smaller pieces to distribute across multiple cores. For some networks (e.g. most densely-connected networks), this can happen by itself without any guidance from the user.
For networks that use nengo.Convolution transforms, such as image processing networks, some assistance is usually required to tell NengoLoihi how to split an ensemble. This is because grouping the neurons sequentially is rarely ideal for such networks. For example, if an Ensemble is representing a 32 x 32 x 4 image (that is 32 rows, 32 columns, and 4 channels), we might want to split that ensemble into four 32 x 32 x 1 groups, or four 16 x 16 x 4 groups. In the first case, each Loihi core will contain information from all spatial locations in the image, but each will only contain one of the channels. In the second case, each Loihi core will represent a different spatial quadrant of the image (i.e., top-left, top-right, bottom-left, bottom-right), but each will contain all channels for its respective location. In neither case, though, will all cores contain solely consecutive pixels, assuming our pixel array is ordered by rows, then columns, then channels. Since the default behaviour of the system is to split into consecutive groups, we need to override that behaviour.
To do this, we use the BlockShape class with the block_shape configuration option:
import nengo
import nengo_loihi
import numpy as np

image_shape = (32, 32, 4)

with nengo.Network() as net:
    nengo_loihi.add_params(net)  # registers the block_shape config option

    # first case: splitting across channels
    ens1 = nengo.Ensemble(np.prod(image_shape), 1)
    net.config[ens1].block_shape = nengo_loihi.BlockShape((32, 32, 1), image_shape)

    # second case: splitting spatially
    ens2 = nengo.Ensemble(np.prod(image_shape), 1)
    net.config[ens2].block_shape = nengo_loihi.BlockShape((16, 16, 4), image_shape)
We are not limited to splitting only along the spatial or channel axes. For example, with a 32 x 32 x 4 image we could choose a block shape of 16 x 16 x 2, which would result in 8 cores tiling the image both in the spatial and channel dimensions. We could also use shapes that are uneven in the spatial dimensions, for example 16 x 32 x 2.
Furthermore, the block shape does not have to fit evenly into the image shape in all (or even any) of the dimensions. For example, with a 4 x 4 image, we could choose a block shape of 3 x 3; this would result in 4 blocks: a 3 x 3 block for the top-left of the image, a 3 x 1 block for the top-right, a 1 x 3 block for the bottom-left, and a 1 x 1 block for the bottom-right. In this case, it would be better to use a 2 x 2 block shape, which also results in 4 blocks, but uses resources more equally across all cores. (This assumes that resource constraints are preventing us from using e.g. a 2 x 4 or 4 x 4 block shape that would simply use fewer cores.)
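The number of blocks (and hence Loihi cores) implied by a block shape is just a product of ceiling divisions along each axis. A sketch of the tiling arithmetic, not NengoLoihi's actual splitting code:

```python
import math

def n_blocks(full_shape, block_shape):
    """How many blocks it takes to tile full_shape with block_shape."""
    return math.prod(math.ceil(f / b) for f, b in zip(full_shape, block_shape))

n_blocks((4, 4), (3, 3))            # → 4 blocks: 3x3, 3x1, 1x3, and 1x1
n_blocks((4, 4), (2, 2))            # → 4 blocks, all the same size
n_blocks((32, 32, 4), (16, 16, 4))  # → 4 blocks, splitting spatially
```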
The constraints on BlockShape are that each block has to fit on one Loihi core. The most basic resource limitation is that the number of neurons (the product of the shape), must be less than or equal to 1024 (the maximum number of neurons per core). Our two original block shapes of 32 x 32 x 1 and 16 x 16 x 4 both equal exactly 1024 neurons per core. However, there are other limiting resources on Loihi cores, such as the numbers of input and output axons, and the amount of synapse memory. We therefore may not always be able to use block shapes that fully utilize the number of compartments, if other resources are in short supply.
### Measuring utilization of chip resources¶
The Model.utilization_summary command can be used to get more information on the resources used by each block (i.e. Loihi core). This can help you to judge whether cores are being optimally utilized. When using this command, it is best to give all your ensembles unique labels (like nengo.Ensemble(..., label="my_ensemble")); these names will show up in the summary, allowing you to identify problematic blocks.
### Reducing axons by changing pop_type¶
The pop_type configuration option can be used to set the type of population axons used on convolutional connections. Setting this to 16 instead of 32 (the default) reduces the number of axons required by the model, but also adds some restrictions on how convolutional connections are set up. See add_params for more details.
### Reducing synapses and axons by changing block shape¶
In networks with convolutional connections, an inefficient parameterization can cause some connection weights to be copied up to four times (to work around limitations when mapping these connections onto Loihi). If you are running out of synapse memory or axons on convolutional connections, use the following guidelines to see whether restructuring could help.
First, it is important to understand the difference between the spatial dimensions of a shape and the channel dimension. The channel dimension will be the first dimension of the shape if channels_last=False or the last dimension if channels_last=True. All other dimensions of the shape will be part of the spatial shape. For example, if channels_last=True and we have the shape 32 x 32 x 4, the spatial shape is 32 x 32, and the spatial size is 32 * 32 = 1024.
When choosing our block shape, we can trade off between spatial size and channel size. For example, if our image shape is 32 x 32 x 8, an Ensemble representing this will need at least 8 cores (since 32 * 32 * 8 / 1024 == 8). Two potential block shapes to achieve these 8 cores are 32 x 32 x 1, which has a spatial size of 1024, and 16 x 16 x 4, which has a spatial size of 256. If we wish to reduce the spatial size of our block shape, decreasing the spatial size per block while simultaneously increasing the number of channels per block will often let us keep the same number of neurons per core, but decrease other resources such as synapse memory or axon usage.
If the problem is output axon usage, try to increase the channel size of the blocks on any Ensemble targeted by connections from the problematic Ensemble. Axons can be reused across channels, so the more channels per core, the fewer output axons are required to send the information to all target cores.
If the problem is input axon usage, try to reduce the spatial size of the problematic Ensemble. Again, since axons are reused across channels, changing the channel size will have no effect (potentially, it can even be increased to compensate for the drop in spatial size, and keep the same number of compartments per core).
If the problem is synapse memory usage, then the problem is caused by incoming Connections to the problematic Ensemble. The solution depends on the value of channels_last on the Convolution transform, and the value of pop_type on the Connection (if you have not set pop_type, 32 is the default value). The following can be applied to any or all of the incoming Connections:
• If the connection is using channels_last=False and pop_type=32, extra weights are created if the spatial size is greater than 256 (the factor by which the size of the weights is multiplied is approximately the spatial size divided by 256). Decrease the spatial size.
• If the connection is using channels_last=False and pop_type=16, extra weights are always created. Consider using channels_last=True, or not using pop_type=16 if you are using less than 50% of the available axons.
• If the connection is using channels_last=True and pop_type=32, extra weights are created if there are more than 256 neurons per core. Consider using channels_last=False.
• If the connection is using channels_last=True and pop_type=16, extra weights are created if the number of channels per block is not a multiple of 4, and if there are more than 256 neurons per core. Consider making the channels per block a multiple of 4.
In all cases, decreasing the number of channels per block will decrease the amount of synapse memory used, since there is one set of weights per channel.
## Local machine¶
### SSH hosts¶
Adding ssh hosts to your SSH configuration will make working with remote superhosts, hosts, and boards much quicker and easier. After setting them up, you will be able to connect to any machine through a single ssh <machine> command.
To begin, make a ~/.ssh/config file.
touch ~/.ssh/config
Then open that file in a text editor and add a Host entry for each machine that you want to interact with remotely.
Typically machines that you can connect to directly will have a configuration like this:
Host <short name>
HostName <host name or IP address>
For security, the port on which ssh connections are accepted is often changed. To specify a port, add the following to the Host entry.
Host <short name>
...
Port 1234
Finally, many machines (especially hosts and boards) are not accessible through the open internet and must instead be accessed through another machine, like a superhost. To access these with one command, add the following to the Host entry. <tunnel short name> refers to the <short name> of the Host entry through which you access the machine (e.g., the <host short name> entry uses the superhost’s short name for <tunnel short name>).
Host <short name>
...
ProxyCommand ssh <tunnel short name> -W %h:%p
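Putting these pieces together, a complete config for a hypothetical superhost/host pair might look like the following (all names, addresses, and ports are placeholders):

```
Host superhost-1
    HostName superhost1.example.com
    Port 1234

Host host-1
    HostName 10.0.0.5
    ProxyCommand ssh superhost-1 -W %h:%p
```

With this in place, ssh host-1 connects to the host by transparently tunneling through superhost-1.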
Once host entries are defined, you can access those machines with:
ssh <short name>
You can also use the short name in rsync, scp, and other commands that use ssh under the hood.
For more details and options, see this tutorial.
We recommend that Loihi system administrators make specific host entries for their system available to all users.
### SSH keys¶
SSH keys allow you to log in to remote machines without providing your password. This is especially useful when accessing a board through a host and superhost, each of which requires authentication.
You may already have created an SSH key for another purpose. By default, SSH keys are stored as
• ~/.ssh/id_rsa (private key)
• ~/.ssh/id_rsa.pub (public key)
If these files exist when you do ls ~/.ssh, then you already have an SSH key.
If you do not have an SSH key, you can create one with
ssh-keygen
Follow the prompts, using the default values when unsure. We recommend setting a passphrase in case someone obtains your SSH key pair.
Once you have an SSH key pair, you will copy your public key to each machine you want to log into without a password.
ssh-copy-id <host short name>
<host short name> is the name you specified in your SSH config file for that host (e.g., ssh-copy-id loihi-host). You will be prompted for your password in order to copy the key. Once it is copied, try ssh <host short name> to confirm that you can log in without providing a password.
### Remote port tunneling¶
Tunneling a remote port to your local machine allows you to run the Jupyter notebook server or the NengoGUI server on the superhost or host, but access the web-based interface on your local machine.
To do this, we will create a new terminal window on the local machine that we will keep open while the tunnel is active. In this terminal, do
ssh -L <local port>:localhost:<remote port> <short name>
You will then enter an SSH session in which you can start the process that will communicate over <remote port>.
Example 1: Starting a NengoGUI server on port 8000 of superhost-1, which has a loihi conda environment.
# In a new terminal window on your local machine
ssh -L 8000:localhost:8000 superhost-1
# We are now on superhost-1
source activate loihi
cd ~/nengo-loihi/docs/examples
nengo --port 8000 --no-browser --auto-shutdown 0 --backend nengo_loihi
On your local machine, open http://localhost:8000/ and you should see the NengoGUI interface.
Example 2: Starting a Jupyter notebook server on port 8080 of superhost-2, which has a loihi virtualenv environment.
# In a new terminal window on your local machine
ssh -L 8080:localhost:8080 superhost-2
# We are now on superhost-2
workon loihi
cd ~/nengo-loihi/docs/examples
jupyter notebook --no-browser --port 8080
The jupyter command should print out a URL of the form http://localhost:8080/?token=<long-string>, which you can open on your local machine.
### Syncing with rsync¶
If you work on your local machine and push changes to multiple remote superhosts, it is worth spending some time to set up a robust solution for syncing files between your local machine and the superhosts.
rsync is a good option because it is fast (it detects what has changed and only sends changes) and can be configured to ensure that the files on your local machine are the canonical files and are not overwritten by changes made on remotes. rsync also uses SSH under the hood, so the SSH hosts you set up previously can be used.
rsync is available from most package managers (e.g. apt, brew) and in many cases will already be installed on your system.
The basic command that is most useful is
rsync -rtuv --exclude=*.pyc /src/folder /dst/folder
• -r recurses into subdirectories
• -t copies and updates file modifications times
• -u replaces files with the most up-to-date version as determined by modification time
• -v adds more console output to see what has changed
• --exclude=*.pyc ensures that *.pyc files are not copied
When sending files to a remote host, you may also want to use the --delete option to delete files in the destination folder that have been removed from the source folder.
To simplify rsync usage, you can make small bash functions to make your workflow explicit.
For example, the following bash functions will sync the NxSDK and nengo-loihi folders between the local machine and the user’s home directory on host-1. In this example, the --delete flag is only used on pushing so that files are never deleted from the local machine. The --exclude=*.pyc flag is only used for nengo-loihi because *.pyc files are an important part of the NxSDK source tree. These and other options can be adapted based on your personal workflow.
LOIHI="/path/to/nengo-loihi/"
NXSDK="/path/to/NxSDK/"
push_host1() {
rsync -rtuv --exclude=*.pyc --delete "$LOIHI" "host-1:nengo-loihi"
rsync -rtuv --delete "$NXSDK" "host-1:NxSDK"
}
pull_host1() {
rsync -rtuv --exclude=*.pyc "host-1:nengo-loihi/" "$LOIHI"
rsync -rtuv "host-1:NxSDK" "$NXSDK"
}
These functions are placed in the ~/.bashrc file and executed at a terminal with
push_host1
pull_host1
### Remote editing with SSHFS¶
If you primarily work with a single remote superhost, SSHFS is a good option that allows you to mount a remote filesystem to your local machine, meaning that you manipulate files as you normally would on your local machine, but those files will actually exist on the remote machine. SSHFS ensures that change you make locally are efficiently sent to the remote.
SSHFS is available from most package managers, including apt and brew.
To mount a remote directory to your local machine, create a directory to mount to, then call sshfs to mount it.
mkdir -p <mount point>
sshfs -o allow_other,defer_permissions <host short name>:<remote directory> <mount point>
When you are done using the remote files, unmount the mount point.
fusermount -u <mount point>
Note
If fusermount is not available and you have sudo access, you can also unmount with
sudo umount <mount point>
As with rsync, since you may do these commands frequently, it can save time to make a short bash function. The following example functions mount and unmount the host-2 ~/loihi directory to the local machine’s ~/remote/host-2 directory.
mount_host2() {
mkdir -p ~/remote/host-2
sshfs host-2:loihi ~/remote/host-2
}
unmount_host2() {
fusermount -u ~/remote/host-2
}
## Superhost¶
### Plotting¶
If you are generating plots with Matplotlib on the superhost or host, you may run into issues due to there being no monitor attached to those machines (i.e., they are “headless”). Rather than plotting to a screen, you can instead save plots as files with plt.savefig. You will also need to configure Matplotlib to use a headless backend by default.
The easiest way to do this is with a matplotlibrc file.
mkdir -p ~/.config/matplotlib
echo "backend: Agg" >> ~/.config/matplotlib/matplotlibrc
### IPython / Jupyter¶
If you want to use the IPython interpreter or the Jupyter notebook on a superhost (e.g., the INRC superhost), you may run into issues due to the network file system (NFS), which does not work well with how IPython and Jupyter track command history. You can configure IPython and Jupyter to instead store command history to memory only.
To do this, start by generating the configuration files.
jupyter notebook --generate-config
ipython profile create
Then add a line to each of three files so that these databases are stored in memory rather than on the NFS.
echo "c.NotebookNotary.db_file = ':memory:'" >> ~/.jupyter/jupyter_notebook_config.py
echo "c.HistoryAccessor.hist_file = ':memory:'" >> ~/.ipython/profile_default/ipython_config.py
echo "c.HistoryAccessor.hist_file = ':memory:'" >> ~/.ipython/profile_default/ipython_kernel_config.py
### Slurm cheatsheet¶
Most Loihi superhosts use Slurm to schedule and distribute jobs to Loihi hosts. Below are the commands that Slurm makes available and what they do.
sinfo
Check the status (availability) of connected hosts.
squeue
Check the status of your jobs.
scancel <jobid>
Cancel the job with the given ID.
scancel --user=<username>
Cancel all jobs belonging to the given user.
sudo scontrol update nodename="<nodename>" state="idle"
Mark a Loihi host as “idle”, which places it in the pool of available hosts to be used. Use this when a Loihi host that was down comes back up.
Note
This should only be done by a system administrator.
### Use Slurm by default¶
Most superhosts use Slurm to run models on the host. Normally you can opt in to using Slurm when executing a command with
SLURM=1 my-command
However, you will usually want to use Slurm, so to switch to an opt-out setup, open your shell configuration file in a text editor (usually ~/.bashrc), and add the following line to the end of the file.
export SLURM=1
Once you have made this change, you can opt out of using Slurm by executing a command with
SLURM=0 my-command
### Running large models¶
Normally you do not need to do anything other than setting the SLURM environment variable to run a model on Slurm. However, in some situations Slurm may kill your job due to long run times or other factors.
Custom Slurm partitions can be used to run your job with different sets of restrictions. Your system administrator will have to set up the partition. You can see a list of all partitions and nodes with sinfo.
To run a job with the loihiinf partition, set the environment variable PARTITION. For example, you can run bigmodel.py using this partition with
PARTITION=loihiinf python bigmodel.py
Similarly, if you wish to use a particular board (called a “node” in Slurm), set the BOARD environment variable. For example, to run model.py on the loihimh board, do
BOARD=loihimh python model.py
## st.statistics: minimizing the asymptotic variance of an ergodic average subject to a set of constraints
Let

• $$(E, \mathcal E, \lambda)$$ and $$(E', \mathcal E', \lambda')$$ be measure spaces
• $$I$$ be a nonempty finite set
• $$\varphi_i : E' \to E$$ be $$(\mathcal E', \mathcal E)$$-measurable with $$\lambda' \circ \varphi_i^{-1} = q_i \lambda \tag 1$$ for $$i \in I$$
• $$p, q_i : E \to (0, \infty)$$ be $$\mathcal E$$-measurable with $$\int p \,{\rm d}\lambda = \int q_i \,{\rm d}\lambda = 1$$
• $$w_i : E \to (0, 1)$$ be $$\mathcal E$$-measurable, $$w_i' := w_i \circ \varphi_i$$, $$p_i' := \begin{cases} \frac p{q_i} \circ \varphi_i & \text{on } \{q_i \circ \varphi_i > 0\} \\ 0 & \text{on } \{q_i \circ \varphi_i = 0\} \end{cases}$$ and $$f_i' := \begin{cases} \frac f{q_i} \circ \varphi_i & \text{on } \{q_i \circ \varphi_i > 0\} \\ 0 & \text{on } \{q_i \circ \varphi_i = 0\} \end{cases}$$ for $$i \in I$$
• $$\zeta$$ denote the counting measure on $$(I, 2^I)$$ and $$\nu' := w' p' \,(\zeta \otimes \lambda')$$

Let $$f \in L^2(\lambda)$$. Assume $$\{q_i = 0\} \subseteq \{w_i p = 0\}, \tag 2$$ $$\{p = 0\} \subseteq \{f = 0\} \tag 3$$ and $$\{p f \ne 0\} \subseteq \left\{\sum_{i \in I} w_i = 1\right\}. \tag 4$$

Let $$((T_n, X_n'))_{n \in \mathbb N}$$ be the Markov chain (assumed to be in stationarity) generated by the Metropolis-Hastings algorithm with target distribution $$\nu'$$ and $$A_n := \frac 1n \sum_{i=0}^{n-1} \frac{f'}{p'}(T_i, X_i') \;\;\; \text{for } n \in \mathbb N.$$ I want to minimize the asymptotic variance $$\sigma^2 := \lim_{n \to \infty} n \operatorname{Var} A_n$$ with respect to the $$w_i$$. How can we do that?

I know that if $$(Y_n)_{n \in \mathbb N_0}$$ is any time-homogeneous Markov chain, $$\mu := \mathcal L(Y_0)$$, $$g \in L^2(\mu)$$ and $$B_n := \frac 1n \sum_{i=0}^{n-1} g(Y_i)$$, then $$\operatorname{Var} B_n = \frac 1n \left(\operatorname{Var}_\mu(g) + 2 \sum_{i=1}^{n-1} \left(1 - \frac in\right) \operatorname{Cov}(g(Y_0), g(Y_i))\right).$$ Furthermore, if $$L^2_0(\mu) := \left\{h \in L^2(\mu) : \int h \,{\rm d}\mu = 0\right\}$$, $$\mathcal D(G) := \left\{h_0 \in L^2_0(\mu) : \left(\sum_{i=0}^n \kappa^i h_0\right)_{n \in \mathbb N_0} \text{ is convergent}\right\},$$ $$G h_0 := \sum_{n=0}^\infty \kappa^n h_0 \;\;\; \text{for } h_0 \in \mathcal D(G),$$ and $$g_0 := g - \int g \,{\rm d}\mu \in \mathcal D(G)$$, then $$n \operatorname{Var} B_n \xrightarrow{n \to \infty} 2 \langle G g_0, g_0 \rangle_{L^2(\mu)} - \operatorname{Var}_\mu(g). \tag 5$$ In particular, letting $$\mathcal L := -(1 - \kappa)$$, we can consider the spectral gap of $$\mathcal L$$, $$\operatorname{gap} \mathcal L = \inf_{\substack{h \in L^2(\mu) \setminus \{0\} \\ 1 \perp h}} \frac{\langle -\mathcal L h, h \rangle_{L^2(\mu)}}{\left\|h\right\|_{L^2(\mu)}^2} = 1 - \left\|\kappa\right\|_{\mathfrak L(L^2_0(\mu))},$$ where we consider $$\kappa$$ as a nonnegative self-adjoint operator on $$L^2(\mu)$$. With this definition, the right-hand side of $$(5)$$ is at most $$\left(\frac 2{\operatorname{gap} \mathcal L} - 1\right) \operatorname{Var}_\mu(g)$$.
## stochastic processes: variance of a random variable obtained from a linear transformation
Edit: I have revised this question as suggested.
Suppose there are $$N$$ realizations of a Gaussian process, denoted as vectors $$\mathbf{z}_j \in \mathbb{R}^n$$ for $$j = 1, \ldots, N$$. Let $$y$$ be a random variable such that $$y = \sum_{j=1}^{N} (\mathbf{B} \mathbf{z}_j)(i)$$

where $$\mathbf{B}$$ is a unitary matrix. What is the variance of $$y$$?

Explanation: Boldface represents a vector or matrix. $$(\mathbf{B} \mathbf{x})(i)$$ denotes the $$i$$-th entry of the vector $$\mathbf{B} \mathbf{x}$$.
## probability – variance of a fair coin
Consider that Vamshi decides to toss a fair coin repeatedly until he gets a tail, making at most $$4$$ tosses. The value of the variance of $$T$$ is ______

I have tried this:

$$\begin{array}{c|cccc} x & 1 & 2 & 3 & 4 \\ \hline P(x) & \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} \end{array}$$

The average is $$\frac{1}{4}\left(\frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^3} + \frac{1}{2^4}\right) = \frac{15}{64}$$

Then, the variance will be

$$\frac{1}{4}\left(\left(\frac{15}{64} - \frac{1}{2}\right)^2 + \left(\frac{15}{64} - \frac{1}{4}\right)^2 + \left(\frac{15}{64} - \frac{1}{8}\right)^2 + \left(\frac{15}{64} - \frac{1}{16}\right)^2\right) = \frac{460}{16384}$$

But the given answer is:

$$E\left(X^2\right) = 1^2 \times \frac{1}{2} + 2^2 \times \frac{1}{4} + 3^2 \times \frac{1}{8} + 4^2 \times \frac{1}{16}$$

$$E\left(X\right) = 1 \times \frac{1}{2} + 2 \times \frac{1}{4} + 3 \times \frac{1}{8} + 4 \times \frac{1}{16}$$

$$V\left(X\right) = E\left(X^2\right) - \left(E\left(X\right)\right)^2 = \frac{252}{256}$$
Why is my approach giving incorrect results?
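(Not part of the original question: the book's numbers can be reproduced with exact rational arithmetic; this just repeats the $$E(X^2) - E(X)^2$$ computation above.)

```python
from fractions import Fraction

xs = [1, 2, 3, 4]
ps = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 16)]

ex = sum(x * p for x, p in zip(xs, ps))        # E(X)
ex2 = sum(x * x * p for x, p in zip(xs, ps))   # E(X^2)
var = ex2 - ex**2

print(var)  # 63/64, i.e. 252/256
```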
## probability – the variance of a sample from a normal population
Please consider the problem and my solution below. I agree with the answer in the back of the book, but somehow my solution does not seem right to me. Did I do it the right way?

Problem:

A normal population has a variance of $$15$$. If samples of size $$5$$ are drawn from this population, what percentage can be expected to have variances (a) less than $$10$$?
Let $$S^2$$ be the variance of the sample and $$n$$ be the size of the sample. The expression $$nS^2/\sigma^2$$ will have a chi-square distribution with $$4$$ degrees of freedom.

\begin{align*} \sigma^2 &= 15 \\ \frac{nS^2}{\sigma^2} &= \frac{5S^2}{15} = \frac{S^2}{3} \\ S^2 &= 10 \\ \frac{nS^2}{\sigma^2} &= 10/3 \end{align*}
Using R we find:
pchisq(10/3, df = 4) = 0.496
Therefore the answer is $$0.496$$.
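(As a side note, the R value can also be checked without R: for $$4$$ degrees of freedom the chi-square CDF has the closed form $$F(x) = 1 - e^{-x/2}(1 + x/2)$$.)

```python
import math

def chi2_cdf_df4(x):
    """Chi-square CDF for 4 degrees of freedom (closed-form Erlang CDF)."""
    return 1 - math.exp(-x / 2) * (1 + x / 2)

print(round(chi2_cdf_df4(10 / 3), 3))  # 0.496, matching pchisq(10/3, df = 4)
```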
## probability: variance of the dot product of a random binary vector and a constant vector
Let $$q \in \mathbb{R}^N$$ and let $$Z$$ be a random binary vector s.t. the sum of its elements is $$n$$.

The variance of the statistic $$Z \cdot q = \sum^N_{i=1} Z_i q_i$$ should be $$\frac{N-n}{N-1} \times n \times \frac{\sum^N_{i=1} (q_i - \bar q)^2}{N},$$ however, I do not understand how to arrive at this equality.

What I do know is that

1. $$E[Z \cdot q] = n \bar q$$
2. $$P[Z_i = 1, Z_j = 1] = \frac{n(n-1)}{N(N-1)}$$

I also got to this: $$\operatorname{Var}(Z \cdot q) = \sum^N_{i=1} (q_i - \bar q)^2 \frac{n-1}{N^2} + \sum^N_{j=1} \sum^N_{k=1, k \neq j} (q_j - \bar q)(q_k - \bar q) \frac{n}{N} \left(\frac{n-1}{N-1} - \frac{n}{N}\right)$$
but I do not know how to simplify this expression even further. Can someone help me, please?
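(Though it proves nothing in general, the claimed formula can be checked exhaustively for a small case, since $$Z$$ is uniform over all binary vectors with sum $$n$$. Here $$N = 5$$, $$n = 2$$, and $$q$$ is arbitrary; these are illustrative values, not from the question.)

```python
from fractions import Fraction
from itertools import combinations

N, n = 5, 2
q = [Fraction(v) for v in (3, 1, 4, 1, 5)]

# Enumerate every binary Z with sum n; Z.q is the sum of the chosen entries.
dots = [sum(q[i] for i in idx) for idx in combinations(range(N), n)]
mean = sum(dots) / len(dots)
var = sum((d - mean) ** 2 for d in dots) / len(dots)

# The claimed formula.
qbar = sum(q) / N
claimed = Fraction(N - n, N - 1) * n * sum((qi - qbar) ** 2 for qi in q) / N

print(var, claimed)  # both 96/25: the formula agrees with brute force
```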
## non-linear optimization: a unit vector that maximizes variance in a discrete probability distribution
This could be a silly question, but I was working on a problem and found the following (sub)problem.

Suppose we have a nonnegative vector $$\pi \in \mathbf{R}^n$$ which satisfies $$\sum_{i=1}^n \pi_i = 1$$, that is, it is a discrete probability density. We want to choose a unit vector $$v \in \mathbf{R}^n$$, $$\|v\| = 1$$, where $$\|\cdot\|$$ is the Euclidean norm, such that the "variance"

$$f(v) = \sum_{i=1}^n v_i^2 \pi_i - \left(\sum_{i=1}^n v_i \pi_i\right)^2$$

is maximized. Of course there will be multiple maxima, because $$f(v) = f(-v)$$.

Is there a closed-form solution for the maximizer $$v$$, or some idea of how to find it, or what the maximum value of the function $$f(v)$$ is?

Specifically, I want to show that if $$\pi_1 > 0.5$$, then any maximizer $$v$$ satisfies $${\rm sign}(v_1) = -{\rm sign}(v_i)$$ for all $$i \neq 1$$.
Any advice or ideas? Thank you!
## performance – Kubernetes – high variance in response times
We have set up a single-node Kubernetes cluster, and then a multi-node cluster using kubeadm, and we are experiencing performance problems.

We have measured response times and are seeing large variation in them; sometimes execution is fast and completes in a few seconds, and sometimes response times double or triple and, in exceptional cases, spike to an unreasonably high value.

1. I tried invoking the same endpoint repeatedly over a period of time. There is almost always an initial period of 10 to 20 seconds when the latency is quite high, after which it normalizes to somewhat better values. However, these APIs are not designed for such a scenario.
2. A pseudo-real-world scenario with random think times was used, and the selected endpoints were invoked for a period (30-60 seconds). In this case there is no pronounced start phase with peaks, but the variation mentioned above is there.

I would appreciate suggestions on what to look for in order to understand and mitigate this problem. When testing the API endpoints, the payload and activity in the cluster remained constant.

We are running the Kubernetes setup on EC2 in AWS. We have used tools like Weave Scope and htop and have not found starvation in terms of CPU or memory.

• A single node is an EC2 instance with 8 cores and 32 GB.
• Multi-node is a three-node setup (master: 4 cores, 16 GB; workers: 8 cores, 32 GB and 4 cores, 16 GB)
## probability – Inverse of a sum of exponential random variables (mean and variance)
Assume $$X_1, X_2, \dots, X_n$$ follow the exponential distribution with mean $$\theta > 0$$, and consider the statistic:

$$T = \sum\limits_{i=1}^n X_i$$

I know that the sum of exponential random variables follows the Gamma distribution, but I cannot infer anything about the inverse of the sum $$\frac{1}{T}$$. My guess would be that:

$$E\left[\frac{1}{T}\right] = \frac{1}{\theta} \quad \text{and} \quad V\left[\frac{1}{T}\right] = \frac{1}{\theta^2}$$
but why?
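(Not part of the original question: a quick Monte Carlo check with the standard library suggests the guess is off. For $$T \sim \text{Gamma}(n, \theta)$$ with $$n > 1$$, the known mean of the inverse is $$\frac{1}{\theta(n-1)}$$, not $$\frac{1}{\theta}$$; the parameter values below are illustrative.)

```python
import random

random.seed(0)

n, theta = 5, 2.0
trials = 100_000

# Each trial: T is a sum of n exponentials with mean theta; average 1/T.
est = sum(
    1.0 / sum(random.expovariate(1.0 / theta) for _ in range(n))
    for _ in range(trials)
) / trials

exact = 1.0 / (theta * (n - 1))  # known mean of the inverse-gamma variable
print(est, exact)                # est should land close to 0.125
```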
## random variables – Calculate the variance of the population using means and variances of the stratum
I have three strata (1-3) with means/variances of 25/5, 26.5/4.5 and 24.5/10 respectively. I have calculated the sample sizes to be 11, 4 and 25 (40 in total).

How can I calculate the total variance of the population from the stratum means and variances?
My objective is to compare the SE of the SRS and the stratified sample.
## Probability: conditional expectation and variance of a mean-preserving spread
My first post here, and my math skills are more than a little rusty. I have a simple question for you: suppose that $$Y$$ is a mean-preserving spread of $$X$$.

Is it always true that $$E(X \mid X > Y) < E(Y \mid Y < X)$$? How to prove it?

Is it also true that, for some value of $$C > 0$$, $$\operatorname{Var}(Y \mid Y > C) < \operatorname{Var}(X \mid X > C)$$?
Thank you very much in advance!
# The VFD Series – Part 1: The ups and downs of SPWM
I’ve given up on trying to post periodically here. Let’s see if this brave act of reverse psychology has some effect on my productivity.
# Introduction
Recently, I’ve started working on firmware for a three-phase frequency inverter. While this is absolutely no technological revolution on the grand scheme of things, it’s certainly new ground for me – and deserves some proper notetaking. So, before we dive into anything, let’s do a quick overview. What’s a frequency inverter? Very broadly, a frequency inverter, or Variable Frequency Drive (VFD), is a device that takes a periodic input signal with a certain frequency and generates a periodic output signal with a different, controllable frequency.
Practically, we’ll be almost always dealing with sinusoidal (or sinusoidal-ish) input and output signals. The reason for that being that mains power is sinusoidal, and most loads we’re interested in (i.e., induction motors) require some form of rotating magnetic field that can be canonically generated by a superposition of sine waves. Also, on a typical industrial context, mains power is three-phase AC (figure below, left). On this setup, you have three sinusoidal waves, 120º apart. There are a lot of inherent advantages to this arrangement, including a reduction in wire count and gauge, easier hookup to loads and so on. Three-phase AC is a whole world in and of itself, and since I’m not qualified to give you a tour of it, I’ll leave its further comprehension as an exercise to the reader (ElectroBOOM to the rescue).
On the above figure, to the right, we see a simplified representation of a three-phase VFD: first, the input three-phase AC signal is rectified into a constant(-ish) DC potential; then, using some form of switching element (e.g., MOSFETs, IGBTs), an output three-phase signal with the desired frequency is generated by combining the output of the U, V and W legs of the circuit. This, in itself, is already the first challenge to be faced: $T_1, T_2, \dots, T_6$ are most efficient when operating in the saturation region – i.e., as hard switches, either fully on, or fully off. So, how can we produce a sinusoidal output signal via elements with a binary behavior?
# Mathemagics: the sideways ZOH
First and foremost, by looking at the above diagram, it is very clear that no two switches on the same leg should be simultaneously active at any time – that would constitute a short circuit, raise some magic smoke and probably pop a breaker. With that out of the way, we assume that each leg of the VFD can only be in one of two states: on (i.e., upper switch closed, bottom one open), or off (upper switch open, bottom one closed). This means each phase’s output can be either tied to the DC bus’ voltage, or to the ground/reference potential.
So, we are interested in (re)constructing a signal using nothing but switches. Some quick googlin’ points us to a very commonly used technique for signal reconstruction, called Zero-Order Hold (ZOH). This technique allows us to create a continuous-time signal from a discrete-time signal (e.g., a series of numeric values), by holding constant each of its values for an interval $T$, as shown in the figure below. In a certain sense, this is akin to a Riemann sum, with the area under each sample acting as an approximation to the area under the original signal at that interval. By setting an appropriate $T$ according to the Nyquist-Shannon sampling theorem, the output $b_{zoh}(t)$ signal will contain the harmonic content of the original $r(t)$. Of course, higher harmonics will be present due to the hard transitions between levels, but these should be filtered off.
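(As an illustration, not from the original post: a zero-order hold is just each sample repeated for the duration of one hold interval. The sketch below reconstructs a staircase from a short, made-up sample sequence.)

```python
def zoh(samples, steps_per_interval):
    """Zero-order hold: repeat each sampled value r(kT) for one interval T,
    discretized into ``steps_per_interval`` output points."""
    out = []
    for value in samples:
        out.extend([value] * steps_per_interval)
    return out

r = [0.0, 0.7, 1.0, 0.7, 0.0]   # samples r(kT) of some signal
b = zoh(r, 4)                   # staircase b_zoh(t), 4 points per interval T
print(b[:8])                    # [0.0, 0.0, 0.0, 0.0, 0.7, 0.7, 0.7, 0.7]
```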
In this definition, the signal $b_{zoh}(t)$ has arbitrary, non-discrete values (i.e., the sampled values of $r(kT), k \geq 0$). However, as mentioned before, the VFD we’re dealing with only allows each phase to be in two discrete states: fully on or fully off. How to circumvent this? While we can’t control the VFD’s amplitude, we can control how long we keep the signal on or off – that is, each pulse can have its width modulated (PWM). Thinking along the lines of the aforementioned Riemann sum, we can try “tipping” each sample of $b_{zoh}(t)$ on its side. So, assume that on an instant $t_0$, our signal of interest $r(t_0) = f_0$ (figure below, left). Our reconstructed signal $b_{zoh}(t)$ will hold the value $f_0$ during the interval $T = t_1 - t_0$, which results in an area $A_0 = (t_1 - t_0)f_0 = Tf_0$.
Since our VFD signal $b_{pwm}(t)$ can only be 0 or the maximum amplitude $a$ (figure above, center), we wish to find the instant $t_s$ where the area $A_s = (t_s - t_0)a$ approximately matches $A_0$. This can be written straightforwardly as
\begin{align*} A_s &= A_0 \\ a(t_s - t_0) &= f_0(t_1 - t_0) \\ t_s &=\frac{f_0}{a}(t_1 - t_0) + t_0 \end{align*}
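A two-line numeric check of this area matching, with made-up values for $a$, $T$ and $f_0$:

```python
# Numeric check of the area matching above; t0, t1, f0 and a are made-up values.
a = 1.0             # VFD output amplitude
t0, t1 = 0.0, 1e-4  # one modulation interval T
f0 = 0.37           # sampled value r(t0)

ts = (f0 / a) * (t1 - t0) + t0   # switching instant from the derivation
A0 = f0 * (t1 - t0)              # area under the held sample
As = a * (ts - t0)               # area under the PWM pulse
```

As expected, the pulse area $A_s$ lands on the held-sample area $A_0$, and the switching instant stays inside the interval.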
By plugging the above expression for $t_s$ into $A_s = (t_s - t_0)a$, we get $A_s = Tf_s$. For an adequate interval $T$, we can safely assume that $f_0 \approx f_s$, and thus, $A_s \approx A_0$. Hooray. Now, by also assuming that $0 < r(t) < a, \forall t$, we can write the unsurprising relationship between $t_s$ and $f_s$, in each interval $T$, as
\begin{align*} f_s = a\frac{(t_s - t_0)}{(t_1 - t_0)} && \forall t, t_0 \leq t < t_1 \end{align*}
This relationship, copied-and-pasted over multiple $T$ intervals, yields the sawtooth-shaped carrier waveform $c(t)$, as drawn in the figure above, right. It is now fairly straightforward that $b_{pwm}(t) = a$ when $r(t) > c(t)$ and $b_{pwm}(t) = 0$ otherwise. With a bit of wit, we can now write $b_{pwm}(t)$ generically as
$$\label{eq:pwm} b_{pwm}(t) = \frac{a}{2} (1 + \text{sign} [ r(t) - c(t) ])$$
Neat and compact. This form is also known as natural PWM (or naturally sampled PWM). And, in case you’re wondering: there’s this obscure thing called uniform PWM, but I won’t be touching that. Ever.
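The comparator form is easy to sanity-check. A small numpy sketch (Python, rather than the Matlab used later in the post; the frequencies and amplitudes are arbitrary):

```python
import numpy as np

# Natural PWM by comparator: output is `a` while r(t) > c(t), else 0.
a = 1.0                        # DC bus amplitude
fc = 20.0                      # carrier frequency (arbitrary units)
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
c = a * (fc * t % 1.0)         # sawtooth carrier rising 0 -> a each period
r = 0.5 * a * np.ones_like(t)  # constant reference -> 50% duty cycle
b_pwm = np.where(r > c, a, 0.0)

duty = b_pwm.mean() / a        # average value recovers r/a
```

With a constant reference at half the bus voltage, the average of the PWM signal recovers a 50% duty cycle – exactly the DC content we asked for.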
# Chop that sinewave… Julienne or Chiffonade?
By plugging the generic sinusoidal wave below
$$\label{eq:rsine} r(t) = R_0 + R_1 \cos(2\pi f_1 t + \theta_1)$$
in our equation \ref{eq:pwm} above, we get this neat thing called Sinusoidal Pulse Width Modulation (SPWM). Following the intuition developed in the last section, SPWM generates a waveform with the harmonic content of the desired sine wave, as shown in the figure below. I mean, that PWM signal looks like a jumbled mess, but it has the harmonics we’re looking for, believe me. Low-pass-filtering the signal will reveal the modulated sine wave.
The very attentive may have noticed that, in the above picture, the carrier wave $c(t)$ (in green) is not a sawtooth wave as previously defined, but a triangular wave. In actuality, if we follow the intuition outlined in the previous section, we’ll notice that any triangle-shaped $c(t)$ produces the same $A_0 = A_s$ area equivalence. In practical applications, however, only three different carrier waveforms are used, which yield three basic PWM schemes:
• Sawtooth: trailing-edge modulation, or left-aligned PWM (figure below, left)
• Triangular: double-edge modulation, or center-aligned PWM (figure below, center)
• Inverse sawtooth: leading-edge modulation, or right-aligned PWM (figure below, right)
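For reference, the three carriers over one period can be sketched as follows (a hypothetical `carrier` helper, not code from the post):

```python
import numpy as np

def carrier(phase, kind):
    # One period of a unit-amplitude carrier; `phase` lives in [0, 1).
    phase = np.asarray(phase) % 1.0
    if kind == "sawtooth":           # trailing-edge / left-aligned
        return phase
    if kind == "triangle":           # double-edge / center-aligned
        return 1.0 - np.abs(2.0 * phase - 1.0)
    if kind == "inverse-sawtooth":   # leading-edge / right-aligned
        return 1.0 - phase
    raise ValueError(kind)

p = np.linspace(0.0, 1.0, 101)[:-1]
saw, tri, inv = (carrier(p, k) for k in ("sawtooth", "triangle", "inverse-sawtooth"))
```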
When I faced this carrier-wave-palette for the first time, my initial question was, “ok, cool. So which one do I pick? Is there any difference?”. As it turns out, there’s always some debate over which one to use, but few quantitative approaches to the issue. From an implementation perspective, a triangular carrier wave is a bit more of a hassle. On an MCU, a sawtooth wave is typically spawned by a counter that simply counts up and overflows. A triangular wave, on the other hand, requires said timer to count up, then down again. This means that, for a set carrier wave frequency, you have to either feed the counter a clock signal twice as fast, or sacrifice one bit of the counter’s resolution. But beyond that, is there any modulation strategy that reduces undesired harmonics in the output signal?
This question is relevant for a couple of reasons. First, VFD output isn’t usually filtered before it gets to the load – in fact, the load itself acts as a filter. In the case of an induction motor as load, the coils act as RL-filters, smoothing out the input current. Still, lots of undesired harmonic content in the signal might reduce efficiency, produce heat, and cause vibration and audible noise due to magnetostriction.
# … but if you judge a fish by its ability to modulate a wave …
Cool. So, at this point, the goal is very clear: evaluate the VFD output from different PWM schemes. But evaluate according to what? Let’s pick two neat ones: harmonic content and total harmonic distortion.
### Harmonic Content
We are clearly interested in evaluating which harmonics appear on each PWM scheme. My first thought was to look at the $C_n$ coefficients of the compact trigonometric Fourier series, as defined in equation \ref{eq:trigfourier} below.
\begin{align} \label{eq:trigfourier} f(t) = C_0 + \sum_{n=1}^{\infty} C_n \text{cos}(n\omega_0t + \theta_n) \end{align}
Unfortunately, things are a bit more hairy than that. If we take a look at the PWM equation \ref{eq:pwm} above, we see the obvious fact that the function is periodic in both $r(t)$ and $c(t)$. To apply the Fourier definition above, we’d need to come up with a closed form for a single period of $\text{sign} [ r(t) - c(t) ]$, which is clearly algebraic masochism (or suicide). In order to analytically study the Fourier expansion of such a function, we need to introduce the Double Fourier Series Method: for any function $f(x, y)$, periodic in both $x$ and $y$, with a period of $2\pi$ in both axes* we can write:
\begin{align}\label{eq:doublefourier} f(x, y) &= C_{00} + \sum_{n=1}^{+\infty}C_{0n} \text{cos}(ny+\theta_{0n})+ \sum_{m=1}^{+\infty}C_{m0}\text{cos}(mx + \theta_{m0}) \notag \\ &+ \sum_{m=1}^{+\infty}\sum_{n=\pm1}^{\pm\infty}C_{mn}\text{cos}(mx + ny + \theta_{mn}) \end{align}
Well, first off, for all the math inclined folks out there: sorry, but I’m not touching this with a ten foot pole – I’ll be going down the numerical route. However, if you do wish to check out its analytical expansion for various PWM strategies, check this document. Regardless, looking at this expression does provide us with some neat insight about what we should expect to see: the first term on the right-hand side of equation \ref{eq:doublefourier} represents a DC component, while the second and third terms represent the harmonics of both $y$ and $x$, respectively – these are identical to the one-dimensional Fourier Series in equation \ref{eq:trigfourier} above. More interesting, however, is the fourth term in that expression. It expresses the sideband frequencies that are generated as a result of the modulation process. We see that $n$ assumes positive and negative integer values, thus yielding upper- and lower-sideband (USB and LSB) spectra around each main harmonic of the carrier frequency.
*If we want \ref{eq:pwm} to fit this criterion, we could write it as $b_{pwm}(t) = f(x, y) = \text{sign}[r(x) - c(y)]$, $y = 2\pi f_1 t + \theta_1$, $x = 2\pi f_c t + \theta_c$, where $f_c$ is the carrier’s frequency and $f_1$ is the modulated sine’s frequency.
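We can at least see this sideband structure numerically. The sketch below builds a trailing-edge natural SPWM signal and its one-sided amplitude spectrum (Python instead of the post’s Matlab; `fs`, `fc`, `f1` and the 0.8 modulation index are illustrative choices):

```python
import numpy as np

# Natural trailing-edge SPWM and its one-sided amplitude spectrum.
fs = 100_000                     # samples per second (1 s of signal)
t = np.arange(fs) / fs
f1, fc, a = 50.0, 1000.0, 1.0
r = 0.5 + 0.4 * np.cos(2 * np.pi * f1 * t)    # reference in [0.1, 0.9]
c = (fc * t) % 1.0                            # unit sawtooth carrier
b = np.where(r > c, a, 0.0)

spec = np.abs(np.fft.rfft(b)) / len(b) * 2.0  # bin k corresponds to k Hz here
```

The 50 Hz bin reproduces the reference amplitude, and a cluster of carrier-plus-sideband components shows up around 1 kHz, as equation \ref{eq:doublefourier} predicts.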
### Total Harmonic Distortion
While the Fourier series gives us detailed information on the signal’s harmonics, the Total Harmonic Distortion (THD) factor gives us a handy ratio between the harmonics we care about and the ones we do not. As mentioned, we are interested in producing a pure sine wave, and as such, we care about only a single fundamental harmonic – everything else may be properly labeled as distortion. Our THD can thus be expressed as
$$\label{eq:thd} \text{THD}_F = \frac{\sqrt{\sum_{n=2}^\infty v_n^2}}{v_1}$$
where $v_n$ is the amplitude of the n-th harmonic and the F in “$\text{THD}_F$” stands for fundamental. Pure sine waves have a $\text{THD}_F = 0$, square waves have a $\text{THD}_F = 48.3\%$ (percent is a common representation for THD), and higher factors indicate higher distortion – in our case, meaning that less power is going where it should.
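This definition translates almost verbatim into code. A numpy sketch (again Python rather than Matlab; it assumes the signal spans an integer number of fundamental periods, so the harmonics fall on exact FFT bins):

```python
import numpy as np

def thd_f(signal, fs, f1):
    # THD relative to the fundamental f1; assumes `signal` spans an
    # integer number of f1 periods so harmonics fall on exact FFT bins.
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n * 2.0  # one-sided amplitudes
    k1 = round(f1 * n / fs)                       # fundamental's bin index
    v1 = spec[k1]
    vn = spec[2 * k1 :: k1]                       # bins of 2*f1, 3*f1, ...
    return np.sqrt(np.sum(vn ** 2)) / v1

fs, f1 = 10_000, 50
t = np.arange(fs) / fs                            # 1 s -> 50 full periods
thd_sine = thd_f(np.sin(2 * np.pi * f1 * t), fs, f1)
thd_square = thd_f(np.sign(np.sin(2 * np.pi * f1 * t)), fs, f1)
```

A pure sine comes out with essentially zero THD, and the square wave lands near the textbook 48.3% (slightly below it here, since harmonics above Nyquist are truncated).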
# Three-sum
Naturally, we are interested in performing the aforementioned evaluations on the VFD’s output. To that end, our last missing ingredient is a means of properly combining the signals of the U, V and W legs into a single output signal. As discussed in the introduction, we are interested in three-phase AC inputs and outputs. Such signals can be easily visualized as a phasor projected onto three base vectors, 120° apart** – each projection representing an individual phase. Well, since a picture is worth a thousand words, let the shamelessly stolen GIF below talk for itself:
In order to properly compute the magnitude of the rotating equivalent vector above (in black), we need to represent it in an orthonormal basis (as we see above, the $\{U, V, W\}$ vectors span only $\mathcal{R}^2$ and thus cannot be linearly independent, let alone orthonormal). Let’s thus pick the $\{\alpha, \beta\}$ vectors below as our new basis:
The choice of $\{\alpha, \beta\}$ is arbitrary (as long as they are orthonormal), but done to simplify upcoming calculations (since $\alpha$ represents the real part of the signal, and $\beta$, its imaginary part). Now, with a bit of trigonometry, we can represent a periodic signal $v(t)$ shown above in our new base:
$$\label{eq:clarke} v(t) = \begin{bmatrix} v_{\alpha} \\ v_{\beta} \end{bmatrix} = { \frac{2}{3} } { \small \begin{bmatrix} 1 & -1/2 & -1/2 \\ 0 & \sqrt{3}/2 & -\sqrt{3}/2 \end{bmatrix} } \begin{bmatrix} v_U \\ v_V \\ v_W \end{bmatrix}$$
This relationship is known as the Clarke transform, and is frequently used in the analysis of three-phase AC circuits. Let’s now assume that the U, V and W phases are each producing individual SPWM signals as per equation $\ref{eq:pwm}$, all 120° apart from each other (figure below, right) – notice that the phases’ outputs were normalized to the $[0, 1]$ range. We can now combine them via our definition \ref{eq:clarke} above, to produce the actual output signal of our VFD (real part $\alpha$ drawn in the figure below, right):
It is worth noting how each phase is capable of producing only unipolar signals – i.e., signals ranging from 0V to the $V_{DC}$ voltage of the VFD’s DC bus. Their combined output, however, yields true bipolar output. While this might be slightly counter-intuitive at first, imagine all three phases producing a steady PWM signal with a 50% duty cycle. This “balance point” produces zero output (since $V_\alpha = \frac{2}{3}(1\cdot 0.5 - \frac{1}{2}\cdot 0.5 - \frac{1}{2}\cdot 0.5) = 0$, as per equation \ref{eq:clarke}). From this state, by changing the value of one or more phases, we can produce arbitrary output vectors with magnitudes ranging from $-2V_{DC}/3$ to $2V_{DC}/3$. For a bit more discussion on that topic, check this out.
**In case you’re wondering, this 120º offset between the phases is ultimately related to the physical placement of the stator windings inside three-phase motors and generators.
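The balance-point argument is easy to verify with the transform itself. A small numpy sketch (this uses the standard amplitude-invariant Clarke matrix, with opposite signs on the $\beta$-row entries for V and W; the names are made up):

```python
import numpy as np

# Amplitude-invariant Clarke transform matrix.
CLARKE = (2.0 / 3.0) * np.array([[1.0, -0.5,            -0.5],
                                 [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])

def clarke(v_uvw):
    # Project the phase values [vU, vV, vW] onto the (alpha, beta) plane.
    return CLARKE @ np.asarray(v_uvw, dtype=float)

balanced = clarke([0.5, 0.5, 0.5])    # 50% duty on all phases -> zero vector

theta = 0.7                           # arbitrary electrical angle
v_uvw = [np.cos(theta),
         np.cos(theta - 2 * np.pi / 3),
         np.cos(theta + 2 * np.pi / 3)]
alpha, beta = clarke(v_uvw)
```

Equal duty cycles on all three phases map to the zero vector, and a balanced three-phase set maps to $(\cos\theta, \sin\theta)$ – i.e., the rotating vector from the GIF.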
# Last, but not least
We are finally ready to answer the question we posed several paragraphs ago: which PWM strategy is best? Left-aligned, right-aligned or center-aligned?
First off, as we’ve discussed before, our evaluation tools only care about the frequency spectra of our signals. So, taking into account the time reversibility of the Fourier series, and noting that left- and right-aligned SPWM waveforms are mirror images of each other, we immediately know that they’re equivalent (for our intents and purposes). So, we’ll only compare trailing-edge and double-edge modulations.
To generate the SPWM signals, I’ve implemented the SPWM definition (i.e., combining equations \ref{eq:pwm} and \ref{eq:rsine}) in Matlab (Code!). Computing the Fourier single-sided amplitude spectrum (Code!), as well as computing $\text{THD}_F$ (Code!), was also done in Matlab. [Any feedback on the correctness of these code snippets would be greatly appreciated] The generated plots are shown below. U, V, and W phases are modulating a 50Hz sinewave with an amplitude modulation index of 0.8, on a 5kHz carrier (identical to the figure above, left), so the expected output amplitude is $0.8*0.5 = 0.4$:
And, voilà. We immediately see that in both modulation schemes, the desired fundamental is there, almost exactly at the desired amplitude ($0.39 \approx 0.4$) – yey! all that SPWM hassle does work after all. Now, interestingly, we have a rather curious result with the FFT spectra and the THD factors. At first, we see that Trailing-Edge modulation has a somewhat smaller distortion factor (70.13%), but its spectrum seems arguably more messy. Moreover, we can see that Double-Edge modulation seems to have much less harmonic content (in fact, it has exactly half of the sideband harmonics, as can be verified by expanding equation \ref{eq:doublefourier} analytically for each scenario, as seen here). On top of that, the fundamental switching harmonics (around 5kHz) have smaller amplitudes in the Double-Edge scheme. So, what gives?
It seems that the higher-order harmonics of the Double-Edge modulation tend to weigh in more heavily in the quadratic sum of the THD factor, yielding a higher overall distortion (88.94%). In practice, however, the RL-filter-characteristic of VFD loads will have a cutoff frequency around the hundreds of Hz, so realistically, harmonics above the switching frequency will have almost no effect***. So, we can confidently argue that, in practical applications, Double-Edge modulation – a.k.a. center-aligned PWM – does produce fewer harmonics, which seems to echo the faint opinions on the topic that float around the interwebs.
Now, of course: as the image above shows, the difference isn’t all that extreme, and as we’ve discussed above, there’s a bit more implementation effort associated with center-aligned PWM. So, once again – and slightly disappointingly – YMMV.
***Very broad and oversimplified generalization. Don’t sue me if you fry your setup testing something I’ve said.
# Disclaimer (& closing thoughts)
Well, let’s just make it very clear: this whole thing is a somewhat brief write-up of my latest incursions into what’s unknown territory for me. I’ve been figuring stuff out on the fly, so if you spot anything wrong, please, let me know.
Edit: In a recent conversation, a friend of mine and a literal master of all-things-electric, Julio, added some very relevant information to this mix. He confirmed that, in practical applications, the choice of carrier waveform is not of great impact. However, when implementing a VFD (e.g. on an MCU) with any kind of feedback control loop, the peaks and valleys in the triangular carrier of the center-aligned PWM can be used to synchronize ADC samplings of the generated waveform (figure below). This ensures that the sampling doesn’t happen during switching, reducing measurement noise (and, in the case of current measurements, providing that pulse’s average value). This article goes into more depth on that, and it’s worth taking a look. He also mentioned that sawtooth carriers (in trailing- and leading-edge modulations) essentially synchronize the switching in all phases. This increases output noise due to parasitics in the circuit/load (stuff that we did not capture in this write-up), and can be a real issue in high-power applications. Thanks for the insight, Julio!
’til next time.
https://www.nature.com/articles/s41598-018-38122-0

## Introduction
Vibration of FGM cylindrical shells is a widely studied area of research in theoretical and applied mechanics. Among the large number of studies on vibrations of cylindrical shells (CS), we cite a few. Arnold and Warburton1,2 carried out some influential work on shell frequency analysis. Shell vibration analysis has been carried out by employing different numerical techniques such as the Galerkin method, the Rayleigh-Ritz method, differential quadrature methods and the finite difference method. These shells are fabricated from isotropic, laminated and multi-layered materials. Functionally graded materials have been developed by applying powder technology. Functionally graded materials are utilized for various purposes because of the controlled material distribution in their fabrication. They are mostly used in high-pressure and heat-dominated environments. Sharma et al.3 scrutinized the vibration behaviour of cylindrical shells by employing the Rayleigh-Ritz technique for clamped-free boundary conditions. Loy et al.4 analysed the fundamental frequencies of circular cylindrical shells by a generalized differential quadrature method (DQM). Further, Loy et al.5 investigated the vibrations of functionally graded (FG) cylindrical shells fabricated from stainless steel and nickel. They showed the effects of the constituent compositions on the frequencies. Moreover, Pradhan et al.6 explored the vibration behaviour of FG cylindrical shells fabricated from stainless steel and zirconia for different edge conditions. Zhang et al.7 scrutinized free vibrations of cylindrical shells for different edge conditions by employing a local adaptive DQM. Naeem et al.8 employed a generalized DQM to investigate the vibration behaviour of functionally graded material cylindrical shells. Pellicano9 showed the response of an isotropic cylindrical shell for linear and non-linear vibrations by employing analytical and experimental methods.
Vibration study of FG cylindrical shells has been done by Iqbal et al.10, and the shell governing equations of motion were solved by using the wave propagation technique. This technique is exceptionally helpful for vibration analysis. Axial modal dependence was estimated with the help of beam functions in exponential form. Li et al.11 determined the free vibration behaviour of three-layered cylindrical shells with a functionally graded central layer, using Flügge’s shell theory. Vel12 observed free and forced vibrations of cylindrical shells by using the elasticity solution technique for simply-supported conditions at both ends. Lam et al.13 showed the frequency vibration behaviour of multi-layered FGM cylindrical shells for different edge conditions. Arshad et al.14,15 studied FGM cylindrical shells for vibration frequency analysis with simply-supported end conditions under different volume fraction laws. They used Love’s shell theory and employed the Rayleigh-Ritz technique to solve the problem. Further, they investigated the vibration characteristics of FGM cylindrical shells under different edge conditions for an exponential volume fraction law. Shah et al.16 analysed the natural frequencies of fluid-filled and empty cylindrical shells resting on elastic foundations. Naeem et al.17 explored the vibration behaviour of three-layered functionally graded material cylindrical shells for different edge conditions. The internal and external layers were fabricated from FG materials whereas the central layer was of isotropic material. They used Love’s thin shell theory. Arshad et al.18 examined the natural frequencies of bi-layered cylindrical shells, with one layer fabricated from isotropic material and the other from functionally graded material. The Rayleigh-Ritz technique was utilized. Shah et al.19 scrutinized the vibration behaviour of three-layered FGM cylindrical shells resting on Winkler and Pasternak foundations.
They used the wave propagation approach for the solution of the model.
Ahmad and Naeem20 investigated vibrations of rotating cylindrical shells composed of FG materials. The natural frequencies of the cylindrical shells were studied under the effects of the volume fraction law and different geometric ratios.
## Theoretical Consideration
Consider a cylinder-shaped shell of radius R, thickness h and length L as shown in Fig. 1. An orthogonal coordinate system (x, θ, z) is fixed at the middle surface of the cylindrical shell, where x, θ and z lie in the axial, circumferential and radial directions of the shell, and (u, v, w) are the displacements of the shell in x, θ and z directions respectively.
The strain energy for a CS is represented by $$\Im$$ and is written as
$$\Im =\frac{1}{2}{\int }_{0}^{L}{\int }_{0}^{2\pi }\{\varepsilon \}^{\prime} [S]\{\varepsilon \}\,R\,d\theta \,dx,$$
(1)
where
$$\{\varepsilon \}^{\prime} =\{{\varepsilon }_{1},\,{\varepsilon }_{2},\,\gamma ,\,{K}_{1},\,{K}_{2},\,2\tau \},$$
(2)
where ε1, ε2, γ and K1, K2, τ represent the reference-surface strains and curvatures respectively. The prime (′) denotes the transpose of a matrix. These relations are taken from Sanders’ shell theory and are written as:
$$\{{\varepsilon }_{1},{\varepsilon }_{2},\gamma \}=\{\frac{\partial u}{\partial x},\frac{1}{R}(\frac{\partial v}{\partial \theta }+w),(\frac{\partial v}{\partial x}+\frac{1}{R}\frac{\partial u}{\partial \theta })\},$$
(3)
$$\{{K}_{1},{K}_{2},\tau \}=\{\,-\frac{{\partial }^{2}w}{\partial {x}^{2}},-\frac{1}{{R}^{2}}(\frac{{\partial }^{2}w}{\partial {\theta }^{2}}-\frac{\partial v}{\partial \theta }),-\,\frac{2}{R}(\frac{{\partial }^{2}w}{\partial x\partial \theta }-\frac{3}{4}\frac{\partial v}{\partial x}+\frac{1}{4R}\frac{\partial u}{\partial \theta })\},$$
(4)
and [S] is defined as
$$[S]=[\begin{array}{llllll}{a}_{11} & {a}_{12} & 0 & {b}_{11} & {b}_{12} & 0\\ {a}_{12} & {a}_{22} & 0 & {b}_{12} & {b}_{22} & 0\\ 0 & 0 & {a}_{66} & 0 & 0 & {b}_{66}\\ {b}_{11} & {b}_{12} & 0 & {d}_{11} & {d}_{12} & 0\\ {b}_{12} & {b}_{22} & 0 & {d}_{12} & {d}_{22} & 0\\ 0 & 0 & {b}_{66} & 0 & 0 & {d}_{66}\end{array}],$$
(5)
where aij denote the extensional stiffnesses, bij the coupling stiffnesses and dij the bending stiffnesses (i, j = 1, 2 and 6). They are defined as:
$$\{{a}_{ij},{b}_{ij},{d}_{ij}\}={\int }_{-\frac{h}{2}}^{\frac{h}{2}}{Q}_{ij}\{\mathrm{1,}\,z,\,{z}^{2}\}dz\mathrm{.}$$
(6)
For isotropic materials, $Q_{ij}$ are the reduced stiffnesses, stated as in Loy et al.5
$${Q}_{11}={Q}_{22}=E{(1-{\nu }^{2})}^{-1},\quad {Q}_{12}=\nu E{(1-{\nu }^{2})}^{-1},\quad {Q}_{66}=E{\mathrm{(2(1}+\nu ))}^{-1}.$$
(7)
Here Young’s modulus is represented by E and $\nu$ denotes the Poisson ratio. The coupling stiffnesses bij vanish for homogeneous cylindrical shells and are non-zero for FGM cylindrical shells; their values depend on the material distribution. The bij can be negative or positive due to the asymmetry of the material properties about the mid-plane. The $Q_{ij}$ depend on the physical properties of the FG materials.
With the help of expressions (2) and (5), $\Im$ is written as:
$$\begin{array}{rcl}\Im & = & \frac{1}{2}{\int }_{0}^{L}{\int }_{0}^{2\pi }\{{a}_{11}{{\varepsilon }_{1}}^{2}+{a}_{22}{{\varepsilon }_{2}}^{2}+2{a}_{12}{\varepsilon }_{1}{\varepsilon }_{2}+{a}_{66}{\gamma }^{2}+2{b}_{11}{\varepsilon }_{1}{K}_{1}\\ & & +\,2{b}_{12}{\varepsilon }_{1}{K}_{2}+2{b}_{12}{\varepsilon }_{2}{K}_{1}+2{b}_{22}{\varepsilon }_{2}{K}_{2}+4{b}_{66}\gamma \tau \\ & & +\,{d}_{11}{{K}_{1}}^{2}+{d}_{22}{{K}_{2}}^{2}+2{d}_{12}{K}_{1}{K}_{2}+4{d}_{66}{\tau }^{2}\}\,Rd\theta dx\mathrm{.}\end{array}$$
(8)
Substituting expressions (3) and (4) into expression (8), $\Im$ attains the following form:
$$\Im =\frac{R}{2}{\int }_{0}^{2\pi }{\int }_{0}^{L}[{a}_{11}{(\frac{\partial u}{\partial x})}^{2}+\frac{{a}_{22}}{{R}^{2}}{(\frac{\partial v}{\partial \theta }+w)}^{2}+\frac{2{a}_{12}}{R}\frac{\partial u}{\partial x}(\frac{\partial v}{\partial \theta }+w)+{a}_{66}{(\frac{\partial v}{\partial x}+\frac{1}{R}\frac{\partial u}{\partial \theta })}^{2}-2{b}_{11}(\frac{\partial u}{\partial x})(\frac{{\partial }^{2}w}{\partial {x}^{2}})-\frac{2{b}_{12}}{{R}^{2}}(\frac{\partial u}{\partial x})(\frac{{\partial }^{2}w}{\partial {\theta }^{2}}-\frac{\partial v}{\partial \theta })-\frac{2{b}_{12}}{R}(\frac{\partial v}{\partial \theta }+w)(\frac{{\partial }^{2}w}{\partial {x}^{2}})-\frac{2{b}_{22}}{{R}^{3}}(\frac{\partial v}{\partial \theta }+w)(\frac{{\partial }^{2}w}{\partial {\theta }^{2}}-\frac{\partial v}{\partial \theta })-\frac{4{b}_{66}}{R}(\frac{\partial v}{\partial x}+\frac{1}{R}\frac{\partial u}{\partial \theta })(\frac{{\partial }^{2}w}{\partial x\partial \theta }-\frac{3}{4}\frac{\partial v}{\partial x}+\frac{1}{4R}\frac{\partial u}{\partial \theta })+{d}_{11}{(\frac{{\partial }^{2}w}{\partial {x}^{2}})}^{2}+\frac{{d}_{22}}{{R}^{4}}{(\frac{{\partial }^{2}w}{\partial {\theta }^{2}}-\frac{\partial v}{\partial \theta })}^{2}+\frac{2{d}_{12}}{{R}^{2}}(\frac{{\partial }^{2}w}{\partial {x}^{2}})(\frac{{\partial }^{2}w}{\partial {\theta }^{2}}-\frac{\partial v}{\partial \theta })+\frac{4{d}_{66}}{{R}^{2}}{(\frac{{\partial }^{2}w}{\partial x\partial \theta }-\frac{3}{4}\frac{\partial v}{\partial x}+\frac{1}{4R}\frac{\partial u}{\partial \theta })}^{2}]\,dx\,d\theta \mathrm{.}$$
(9)
The shell kinetic energy is symbolized by $I$ and is stated as:
$$I=\frac{1}{2}{\int }_{0}^{L}{\int }_{0}^{2\pi }{\rho }_{t}[{(\frac{\partial u}{\partial t})}^{2}+{(\frac{\partial v}{\partial t})}^{2}+{(\frac{\partial w}{\partial t})}^{2}]\,R\,d\theta \,dx\mathrm{.}$$
(10)
Here the variable t designates time. The mass density is represented by ρ, and ρt denotes the mass density per unit length, expressed as:
$${\rho }_{t}={\int }_{-\frac{h}{2}}^{\frac{h}{2}}\rho dz\mathrm{.}$$
(11)
The Lagrange energy functional, denoted by $${\mathcal L}$$, for a cylindrical shell is formulated as the difference of the kinetic and strain energies:
$${\mathcal L} =I-\Im .$$
(12)
## Numerical Procedure
The Rayleigh-Ritz procedure is used to obtain the natural frequencies of the cylindrical shell. The displacement fields are presumed in the following form:
$$\begin{array}{ccc}u(x,\theta ,t) & = & {x}_{m}U(x)\,\cos \,(n\theta )\,\sin \,\omega t,\\ v(x,\theta ,t) & = & {y}_{m}V(x)\,\sin \,(n\theta )\,\cos \,\omega t,\\ w(x,\theta ,t) & = & {z}_{m}W(x)\,\cos \,(n\theta )\,\sin \,\omega t,\end{array}$$
(13)
where xm, ym and zm represent the amplitudes of vibration in the x, θ and z directions respectively, the axial and circumferential wave numbers of the mode shapes are denoted by m and n respectively, and ω signifies the angular vibration frequency of the shell wave. U(x), V(x) and W(x) denote the axial modal dependence in the longitudinal, circumferential and transverse directions respectively. Here we take $$U(x)=\frac{d\phi (x)}{dx},\,V(x)=\phi (x),\,W(x)=\phi (x)$$, where φ(x) represents the axial function which satisfies the geometric edge conditions.
The axial function φ(x) is taken as the beam function in the following form,
$$\phi (x)={\beta }_{1}\cosh ({\mu }_{m}x)+{\beta }_{2}\cos ({\mu }_{m}x)-{\sigma }_{m}\,({\beta }_{3}\sinh ({\mu }_{m}x)+{\beta }_{4}\sin ({\mu }_{m}x))$$
(14)
Here the values of βi (i = 1, 2, 3, 4) change with the edge conditions, μm signify the roots of certain transcendental equations, and σm are parameters which depend on the values of μm.
For generalization of this problem, the following non-dimensional parameters are used.
$$\begin{array}{rcl}\underline{{U}_{1}} & = & \frac{U(x)}{h},\,\underline{{V}_{1}}=\frac{V(x)}{h},\,\underline{{W}_{1}}=\frac{W(x)}{R},\\ \underline{{a}_{ij}} & = & \frac{{a}_{ij}}{h},\,\underline{{b}_{ij}}=\frac{{b}_{ij}}{{h}^{2}},\,\underline{{d}_{ij}}=\frac{{d}_{ij}}{{h}^{3}},\\ \alpha & = & R/L,\,\beta =h/R,\,X=\frac{x}{L},\,\underline{{\rho }_{t}}=\frac{{\rho }_{t}}{h}.\end{array}$$
(15)
Now expression (13) is transformed into the following form
$$\begin{array}{rcl}u(x,\theta ,t) & = & h{x}_{m}{U}_{1}\,\cos \,(n\theta )\,sin\omega t,\\ v(x,\theta, t) & = & h{y}_{m} {V}_{1}\,\sin \,(n\theta )\,cos\omega t,\\ w(x,\theta ,t) & = & R{z}_{m}{W}_{1}\,\cos \,(n\theta )\,sin\omega t\mathrm{.}\end{array}$$
(16)
After substituting expression (16) into the expressions for $$\Im$$ and $$I$$, we get $$\Im_{max}$$, $$I_{max}$$ and $${ {\mathcal L} }_{max}.$$ The Lagrangian functional $${ {\mathcal L} }_{max}$$ is then transformed into the following form by applying the principle of maximum energy.
$${ {\mathcal L} }_{max}=\frac{\pi hLR}{2}[{R}^{2}{\omega }^{2}{\underline{\rho}}_{t}{\int }_{0}^{1}({\beta }^{2}{({x}_{m}\underline{{U}_{1}})}^{2}+{\beta }^{2}{({y}_{m}\underline{{V}_{1}})}^{2}+{({z}_{m}\underline{{W}_{1}})}^{2})dX-{\int }_{0}^{1}\{{\alpha }^{2}{\beta }^{2}\underline{{a}_{11}}{({x}_{m}\frac{d\underline{{U}_{1}}}{dX})}^{2}+\underline{{a}_{22}}{(-n\beta {y}_{m}\underline{{V}_{1}}+{z}_{m}\underline{{W}_{1}})}^{2}+2\alpha \beta \underline{{a}_{12}}({x}_{m}\frac{d\underline{{U}_{1}}}{dX})(\,-\,n\beta {y}_{m}\underline{{V}_{1}}+{z}_{m}\underline{{W}_{1}})+\underline{{a}_{66}}{(\alpha \beta {y}_{m}\frac{d\underline{{V}_{1}}}{dX}+n\beta {x}_{m}\underline{{U}_{1}})}^{2}-2{\alpha }^{3}{\beta }^{2}\underline{{b}_{11}}({x}_{m}\frac{d\underline{{U}_{1}}}{dX})({z}_{m}\frac{{d}^{2}\underline{{W}_{1}}}{d{X}^{2}})-2\alpha {\beta }^{2}\underline{{b}_{12}}({x}_{m}\frac{d\underline{{U}_{1}}}{dX})(-{n}^{2}{z}_{m}\underline{{W}_{1}}+n\beta {y}_{m}\underline{{V}_{1}})-2{\alpha }^{2}\beta \underline{{b}_{12}}(\,-\,n\beta {y}_{m}\underline{{V}_{1}}+{z}_{m}\underline{{W}_{1}})({z}_{m}\frac{{d}^{2}\underline{{W}_{1}}}{d{X}^{2}})-2\beta \underline{{b}_{22}}(\,-\,n\beta {y}_{m}\underline{{V}_{1}}+{z}_{m}\underline{{W}_{1}})\,(\,-\,{n}^{2}{z}_{m}\underline{{W}_{1}}+n\beta {y}_{m}\underline{{V}_{1}})-4\beta \underline{{b}_{66}}(\alpha \beta {y}_{m}\frac{d\underline{{V}_{1}}}{dX}+n\beta {x}_{m}\underline{{U}_{1}})(n\alpha {z}_{m}\frac{d\underline{{W}_{1}}}{dX}-\frac{3\alpha \beta {y}_{m}}{4}\frac{d\underline{{V}_{1}}}{dX}+\frac{n\beta }{4}{x}_{m}\underline{{U}_{1}})+{\alpha }^{4}{\beta }^{2}\underline{{d}_{11}}{({z}_{m}\frac{{d}^{2}\underline{{W}_{1}}}{d{X}^{2}})}^{2}+{\beta }^{2}\underline{{d}_{22}}{(-{n}^{2}{z}_{m}\underline{{W}_{1}}+n\beta {y}_{m}\underline{{V}_{1}})}^{2}+2{\alpha }^{2}{\beta }^{2}\underline{{d}_{12}}({z}_{m}\frac{{d}^{2}\underline{{W}_{1}}}{d{X}^{2}})(\,-\,{n}^{2}{z}_{m}\underline{{W}_{1}}+n\beta {y}_{m}\underline{{V}_{1}})+4\underline{{d}_{66}}{(n\alpha {z}_{m}\frac{d\underline{{W}_{1}}}{dX}-\frac{3\alpha \beta {y}_{m}}{4}\frac{d\underline{{V}_{1}}}{dX}+\frac{n\beta }{4}{x}_{m}\underline{{U}_{1}})}^{2}\}dX]\mathrm{.}$$
(17)
The Rayleigh-Ritz procedure is employed to obtain the shell frequency equation in eigenvalue form. The Lagrangian energy functional $${ {\mathcal L} }_{{\max }}$$ is minimized with respect to the vibration amplitudes xm, ym and zm as follows,
$$\frac{\partial { {\mathcal L} }_{{\max }}}{\partial {x}_{m}}=\frac{\partial { {\mathcal L} }_{{\max }}}{\partial {y}_{m}}=\frac{\partial { {\mathcal L} }_{{\max }}}{\partial {z}_{m}}=0.$$
(18)
The obtained equations by arrangements of terms are written in matrix form as
$$\{[C]-{{\rm{\Omega }}}^{2}[M]\}\underline{X}=0$$
(19)
where
$${{\rm{\Omega }}}^{{\rm{2}}}={R}^{2}{\omega }^{2}{\underline{\rho }}_{t},$$
(20)
where [C] and [M] are the stiffness and mass matrices of the cylindrical shell respectively; their values are given in the supplementary file. [C] contains terms related to the material moduli and the mass matrix [M] contains terms associated with the shell mass,
$$\underline{X}^{\prime} =[{x}_{m},{y}_{m},{z}_{m}],$$
(21)
The shell vibration frequencies are determined by solving the eigenvalue equation (19) with the help of MATLAB software.
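For illustration only, a generalized eigenvalue problem of the form (19) can be solved numerically along the following lines. The 3 × 3 matrices below are made-up placeholders, not the actual [C] and [M] of the shell (those are given in the supplementary file):

```python
import numpy as np

# Placeholder stiffness [C] and mass [M] matrices (illustrative values only;
# the real matrices come from the supplementary file of the paper).
C = np.diag([1.0, 4.0, 9.0])
M = np.eye(3)

# {[C] - Omega^2 [M]} X = 0 is the generalized eigenproblem [C] X = Omega^2 [M] X.
# With [M] invertible it reduces to the ordinary eigenproblem of M^{-1} C.
omega_sq = np.linalg.eigvals(np.linalg.solve(M, C))
Omega = np.sqrt(np.sort(omega_sq.real))  # frequency parameters Omega
```

For the diagonal placeholder above the frequency parameters are simply the square roots of the diagonal entries of [C].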
## Classifications of Materials
In the present study, a cylindrical shell constructed from three layers is considered: the internal and external layers are fabricated from isotropic material, while the central layer is constructed from the FG materials nickel and stainless steel. The volume fractions14 of the shell middle layer, constructed from two constituents using a trigonometric volume fraction law (VFL), are given by the following relations:
$${V}_{f1}={\sin }^{2}({[\frac{3z}{h}+\frac{1}{2}]}^{\upsilon }),\,\,{V}_{f2}={\cos }^{2}({[\frac{3z}{h}+\frac{1}{2}]}^{\upsilon })\,\,\,\,\,0\le \upsilon \le \infty .$$
(22)
These relations satisfy the VFL, i.e. Vf1 + Vf2 = 1, where h is the shell thickness and υ denotes the power-law exponent. It is presumed that each layer is of thickness h/3. The material parameters are $${E}_{1},{\nu }_{1},{\rho }_{1}$$ for nickel and $${E}_{2},{\nu }_{2},{\rho }_{2}$$ for stainless steel. Then the effective material quantities $${E}_{fgm},{\nu }_{fgm}$$ and $${\rho }_{fgm}$$ for one type of the configuration are given as:
$$\begin{array}{rcl}{E}_{fgm} & = & [{E}_{1}-{E}_{2}]\,{\sin }^{2}({[\frac{3z}{h}+\frac{1}{2}]}^{\upsilon })+{E}_{2},\\ {\nu }_{fgm} & = & [{\nu }_{1}-{\nu }_{2}]\,{\sin }^{2}({[\frac{3z}{h}+\frac{1}{2}]}^{\upsilon })+{\nu }_{2},\\ {\rho }_{fgm} & = & [{\rho }_{1}-{\rho }_{2}]\,{\sin }^{2}({[\frac{3z}{h}+\frac{1}{2}]}^{\upsilon })+{\rho }_{2}.\end{array}$$
(23)
From expression (23), at z = −h/6, Efgm = E2, $${\nu }_{fgm}={\nu }_{2}$$, ρfgm = ρ2, and the material properties at z = h/6 become:
$$\begin{array}{rcl}{E}_{fgm} & = & [{E}_{1}-{E}_{2}]\,{\sin }^{2}1+{E}_{2},\\ {\nu }_{fgm} & = & [{\nu }_{1}-{\nu }_{2}]\,{\sin }^{2}1+{\nu }_{2},\\ {\rho }_{fgm} & = & [{\rho }_{1}-{\rho }_{2}]\,{\sin }^{2}1+{\rho }_{2}.\end{array}$$
Thus the shell consists of purely stainless steel at z = −h/6, and the material properties are a combination of stainless steel and nickel at z = +h/6. The stiffness moduli are modified as:
$$\begin{array}{rcl}{a}_{ij} & = & {a}_{ij}(iso)+{a}_{ij}(FGM)+{a}_{ij}(iso),\\ {b}_{ij} & = & {b}_{ij}(iso)+{b}_{ij}(FGM)+{b}_{ij}(iso),\\ {d}_{ij} & = & {d}_{ij}(iso)+{d}_{ij}(FGM)+{d}_{ij}(iso),\end{array}$$
where i, j = 1, 2, 6; (iso) represents the internal and external isotropic layers, and (FGM) represents the central functionally graded material layer.
## Results and Discussion
Results for an isotropic cylindrical shell with the following edge conditions, simply supported-simply supported (s-s), clamped-clamped (ς-ς) and clamped-free (ς-f), are compared with results available in the open literature to ensure the validity, authenticity and robustness of the current technique. Tables 1 and 2 show comparisons of frequency parameters with those of Zhang et al.7 for s-s and ς-ς isotropic cylindrical shells. A comparison of natural frequencies (Hz) with those of Loy & Lam4 for a ς-f isotropic cylindrical shell is presented in Table 3. It can be seen clearly that the current results are in agreement with the results in the open literature.
Table 4 lists the types of three-layered FGM cylindrical shell obtained by interchanging the FG constituent materials, where Z1, Z2 and Z3 represent aluminium, stainless steel and nickel respectively. Material properties for these materials are presented in refs5,19. Different thickness arrangements for the shell layers are presented in Table 5.
Here q1 = h/3, q2 = h/4, q3 = h/2, q4 = h/5, q5 = 3h/5.
Tables 6 and 7 list natural frequencies (Hz) of the functionally graded material cylindrical shell versus n for case-II, types I & II, with different power-law exponents υ. In these tables the influence of υ is examined, which differs between the two types. The natural frequencies (Hz) decrease for type-I and increase for type-II by less than 1% when the power-law exponent is increased from υ = 1 to 20 for n = 1–5. Hence the natural frequencies are affected by the configuration of the constituent materials in the three-layered cylindrical shell.
Figures 2–7 show the natural frequencies (NFs) (Hz) of the FGM cylindrical shell against n for different thicknesses of the central layer under six edge conditions: s-s, ς-ς, f-f (free-free), ς-s (clamped-simply supported), ς-f (clamped-free) and f-s (free-simply supported). In Figs 2–4, natural frequencies are presented for cylindrical shells of type I. The natural frequencies decrease up to n = 2 and begin to increase at n = 3 in each case. The natural frequencies are minimum for the clamped-free edge condition compared with the other five edge conditions, and maximum for the free-free edge condition. The behaviour of the natural frequencies (Hz) remains the same for all cases. The natural frequencies decrease by less than 1% when the thickness of the shell middle layer is increased by 66% or 100%. Figures 5–7 present the results for cylindrical shells of type-II. It is clearly seen that the natural frequencies are slightly higher for type-II shells than for type-I shells.
Figures 8–13 show the behaviour of the natural frequencies (Hz) versus n for various L/R ratios and edge conditions. The natural frequencies (Hz) decrease as the L/R ratio increases: when L/R is increased from 10 to 20, 30 and 50, the natural frequencies decrease by 72%, 87% and 95% respectively for n = 1. Natural frequencies (Hz) for different h/R ratios against n are presented in Figs 14–19 under the six edge conditions. The natural frequencies (Hz) increase with increasing h/R ratio. In these figures, the frequencies first decrease from n = 1 to 2 and then increase from n = 2 onwards. As h/R is increased from 0.001 to 0.005, 0.005 to 0.01 and 0.01 to 0.02 at n = 2, the natural frequencies increase by 298%, 98% and 100% for the s-s boundary condition; by 105%, 84% and 95% for the ς-ς and f-f boundary conditions; by 165%, 92% and 98% for the ς-s and f-s boundary conditions; and by 365%, 100% and 100% for the ς-f boundary condition. Thus the natural frequencies are affected significantly by the h/R ratio.
## Conclusions
In the present study, a frequency analysis of a three-layered FGM cylindrical shell is carried out for different thicknesses of the shell middle layer. The strain- and curvature-displacement relationships are adopted from Sanders' theory, and the Rayleigh-Ritz method is employed to solve the problem. Natural frequencies are examined for six edge conditions. It is noticed that the natural frequencies become minimum with increasing thickness of the FGM middle layer. They also decrease with increasing L/R ratio: when L/R is increased by 100%, 200% and 500%, the natural frequencies decrease by 72%, 87% and 95% respectively for n = 1. The frequencies increase with increasing h/R ratio; the thickness-to-radius ratio has a significant effect on the natural frequencies (Hz).
https://study.com/academy/answer/juan-sevez-buys-a-new-computer-priced-at-650-he-makes-a-down-payment-of-15-how-much-of-the-purchase-is-not-paid-for-round-to-nearest-hundredth.html

# Juan Sevez buys a new computer priced at $650. He makes a down payment of 15%. How much of the purchase is not paid for?

## Question:

Juan Sevez buys a new computer priced at $650. He makes a down payment of 15%.
How much of the purchase is not paid for? (Round to nearest hundredth.)
## Down Payment
The down payment is the initial amount paid toward the purchase of an item, usually expressed as a percentage of the total price. The rest of the amount can then be paid in installments.
## Answer and Explanation:
We first need to find the down payment made. The down payment is 15% of the total price, $650. This is: \begin{align} \text{Down Payment}&=15\%\times 650\\ &=\frac{15}{100}\times 650\\&=\$97.50 \end{align} Thus, the down payment is $97.50.
The amount remaining is the amount not paid. This is $650 − $97.50 = $552.50.
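The same arithmetic can be checked in a couple of lines of Python (variable names are ours):

```python
price = 650.00        # computer price
down_rate = 0.15      # 15% down payment

down_payment = round(price * down_rate, 2)  # 15% of $650 = $97.50
remaining = round(price - down_payment, 2)  # amount not paid = $552.50

print(down_payment)  # 97.5
print(remaining)     # 552.5
```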
http://math.stackexchange.com/questions/62178/showing-2n52-n3/62206 | # Showing $2(n+5)^2 < n^3$
I stumbled upon this in my homework and can't seem to get it, any help would be great.
Find the smallest $n$ within $\mathbb N$ such that $2(n+5)^2 < n^3$ and call it $n_0$. Show that $2(n+5)^2 < n^3$ for all $n \geq n_0$.
Can you use mathematical induction? – Doug Spoonwood Sep 6 '11 at 2:50
Yes I can use induction – Steve Sep 6 '11 at 2:53
Then you should use induction! – The Chaz 2.0 Sep 6 '11 at 3:05
The answers show you how to prove that once you have found $n_0$ the inequality is true for all higher $n$. To find $n_0$, you can just compute. A spreadsheet is ideal for this: a column for $2(n+5)^2$ and one for $n^3$, copy down, scan down, and there you are. If you are lazy, another column with the difference of the previous two. When it changes sign... – Ross Millikan Sep 6 '11 at 3:30
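The spreadsheet scan Ross describes amounts to a short brute-force search; a sketch in Python (the function name is ours):

```python
def smallest_n():
    """Return the smallest natural n with 2*(n+5)**2 < n**3."""
    n = 1
    while not 2 * (n + 5) ** 2 < n ** 3:
        n += 1
    return n

# 2*(7+5)^2 = 288 < 343 = 7^3, while 2*(6+5)^2 = 242 > 216 = 6^3
n0 = smallest_n()
print(n0)  # 7
```

This finds n0 = 7, matching the answers below.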
HINT $\$ Below are two sketched inductive proofs that $\rm\ f(n)\ =\ n^3- 2\ (n+5)^2\: >\: 0\$ for $\rm\ n \ge 7\:.$
Proof $\:1.$ $\rm\:\ \ f(n+1) - f(n)\ =\ (3\:n+8)\:(n-3)+3\: >\: 0\$ for $\rm\: n \ge 3\:,\:$ so $\rm\:f(n) < f(n+1)\:$ for $\rm\:n\ge 3\:.\:$ Therefore $\rm\ f(6) < 0 < f(7) < f(8) <\:\cdots\: < f(n)\$ for $\rm\:n \ge 7\:.$
Proof $\:2.$ $\rm\ \ f(n)>0\ \Leftrightarrow\ 1 < g(n) = \dfrac{n^3}{2\:(n+5)^2}\:.\:$ $\rm\ g(6) < 1 < g(7)\$ and $\rm\:g(n)\:$ is increasing by
$$\rm \frac{g(n+1)}{g(n)}\ =\ \frac{(n+1)^3}{n^3}\: \frac{(n+5)^2}{(n+6)^2}\ =\ \frac{n+1}{n}\ \bigg(\frac{n^2+6\ n+5}{n^2+6\ n}\bigg)^2\ >\ 1\quad for\quad n > 0$$
Essentially the $1$st proof is by additive telescopy, and the $2$nd by multiplicative telescopy. Notice how telescopy has reduced the induction to a trivial induction. Namely the first proof shows that $\rm\:f(n) > 0$ because it is a sum of terms $> 0\:,\:$ and the second proof shows that $\rm\:g(n) > 1$ because it is a product of terms $> 1\:.\:$ See the above linked posts for more on this viewpoint. The first additive telescopic method is essentially the fundamental theorem of difference calculus (whose proof - unlike the differential calculus form - is utterly trivial).
NOTE $\$ There are ad-hoc methods that are slightly faster than the above methods, e.g.
$$\rm n \ge \:8 \ \Rightarrow\ n^3 \ge \ 8\:n^2 =\: 2\:(n+n)^2 > \:2\:(n+5)^2$$
The point of mentioning the above telescopic techniques is that they have pedagogical value as general techniques for telescopic induction and, moreover, they lead to effective algorithms for much more general problems, e.g. computing closed forms for sums and products.
precalculus tag probably indicates OP does not know how to take or use the derivative... – Ben Blum-Smith Sep 6 '11 at 3:11
@Ben The above proofs do not use derivatives or calculus. An earlier version included additionally a continuous form of the first proof, but since that has apparently led to some confusion, I have removed that one proof. – Bill Dubuque Sep 6 '11 at 4:43
Hint: $\frac{2(n+5)^2}{n^3} = \frac{2}{n} + \frac{20}{n^2} + \frac{50}{n^3}$ is a decreasing function of $n$ for $n > 0$.
Rewrite $2(n+5)^2 < n^3$ as $2(1+\frac{5}{n})^2 < n$; now you can see that for $n>5$ the left side is less than 8, so the original inequality fails for $n=5$ and holds for $n\geq8$. Test it for $n=6$ and $n=7$ and you are done.
You can see that $n=7$ is the smallest natural number for which the statement is true.
Now try proving that $7n^2 \ge 2(n+5)^2$ for all $n \ge 7$ using the theory of quadratic expressions.
This is sufficient as $n^3 \ge 7n^2$ for $n \ge 7$
http://codeforces.com/blog/entry/53810 | By retrograd, history, 18 months ago, ,
This article will be presenting a rather classical problem that can be solved using deques, along with an extension that allows you to solve the problem in its more general multi-dimensional case. I have decided to write this article after this discussion on 2D range-minimum query.
The article will be mainly based on the following problem:
#### You are given an array of numbers A[] of size n and a number k ≤ n. Find the minimum value for each continuous subarray of size k.
We will now focus on the linear-time solution to this problem.
Solution:
Consider sweeping from left to right through the array. At every moment we keep a list of "candidates" for minimum values throughout the process. That means that at each moment, you have to add one element to the list and (potentially) remove one element from the list.
The key observation is that if, during the sweep line process, we find two values A[i] and A[j] with i < j and A[i] ≥ A[j], then we can safely discard A[i]. That is because, intuitively, A[j] will continue to "live" in our sweep line longer than A[i], and we will never prefer A[i] over A[j].
We should now consider pruning all the "useless" values ("useless" in the sense of the statement above). It is easy to see that doing this leads to a strictly increasing list of candidates (why?). In this case, the minimum will always be the first element (O(1) query).
In order to insert an element at the back of the pruned candidate list, we use a stack-like approach of removing all elements that are greater than it, and to erase an element, we just pop the front of the list (if it is not already removed).
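As a concrete sketch (not code from the article; names are ours), the pruned candidate list is naturally a deque of indices:

```python
from collections import deque

def solve_1d(a, k):
    """Minimum of every length-k contiguous subarray of a, O(n) total."""
    dq = deque()   # indices of candidates; their values are strictly increasing
    out = []
    for i, x in enumerate(a):
        # stack-like pop: discard candidates that are >= the new element
        while dq and a[dq[-1]] >= x:
            dq.pop()
        dq.append(i)
        # pop the front if it has slid out of the current window
        if dq[0] <= i - k:
            dq.popleft()
        if i >= k - 1:
            out.append(a[dq[0]])
    return out

print(solve_1d([4, 2, 12, 11, -5], 2))  # [2, 2, 11, -5]
```

Each index is pushed and popped at most once, which is where the linear total running time comes from.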
This is a well-known approach for finding minima over fixed-size continuous subarrays. I will now present an extension that allows you to do the same trick on matrices and even multi-dimensional arrays.
## The multi-dimensional extension
Problem (2D):
#### You are given a matrix of numbers A[][] of size n × m and two numbers k ≤ n, l ≤ m. Find the minimum value for each continuous submatrix of size k × l.
Solution:
Consider the matrix as a list of rows. For each row vector of A, use the 1D algorithm to compute the minimum value over all l-length subarrays, and store them in ColMin[][] (obviously, ColMin[][] is now an n × (m - l + 1)-sized matrix).
Now, consider the new matrix as a list of columns. For each column vector of ColMin, use the algorithm to compute the minimum value over all k-length subarrays, and store them in Ans[][] (of size (n - k + 1) × (m - l + 1)).
The Ans[][] is the solution to our problem.
The following picture shows the intuition behind how it works for computing Ans[1][1] for n = 5, m = 7, k = 3, l = 4.
The pseudocode is as follows:
    def solve_2d(M, k, l):
        column_minima = {} # empty list
        for each row in M.rows:
            # We suppose we have the algorithm that solves
            # the 1D problem
            min_row = solve_1d(row, l)
            column_minima.append_row(min_row)

        ans = {}
        for each col in column_minima.cols:
            min_col = solve_1d(col, k)
            ans.append_col(min_col)

        return ans
Note that the pseudocode is (deliberately) hiding some extra complexity of extracting rows / columns and adapting the 1D algorithm to the 2D problem, in order to make the understanding of the solution clearer.
The total complexity of the algorithm can be easily deduced to be O(n * m).
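A self-contained sketch of the two-pass reduction in Python (again, names are ours; solve_1d is the 1D deque algorithm from the first section):

```python
from collections import deque

def solve_1d(a, k):
    # 1D sliding-window minimum with a deque of indices
    dq, out = deque(), []
    for i, x in enumerate(a):
        while dq and a[dq[-1]] >= x:
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()
        if i >= k - 1:
            out.append(a[dq[0]])
    return out

def solve_2d(M, k, l):
    # pass 1: length-l window minima along every row -> n x (m - l + 1)
    col_min = [solve_1d(row, l) for row in M]
    # pass 2: length-k window minima down every column of col_min
    ans_cols = [solve_1d(list(col), k) for col in zip(*col_min)]
    # transpose back to (n - k + 1) x (m - l + 1)
    return [list(row) for row in zip(*ans_cols)]

print(solve_2d([[1, 2, 3], [4, 0, 6], [7, 8, 9]], 2, 2))  # [[0, 0], [0, 0]]
```

Every 2 × 2 submatrix of the sample contains the 0, so all four answers are 0.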
#### Multi-dimensional case analysis
The solution can be extended to an arbitrary number of dimensions. For a d-dimensional matrix of size s1, s2, ..., sd, the time complexity of the problem is O(d * s1 * ... * sd), and the memory complexity is O(s1 * ... * sd). This is much better than other algorithms that do the same thing on non-fixed-size submatrices (e.g. multi-dimensional RMQ has O(s1 * ... * sd * log(s1) * ... * log(sd)) time and memory complexity).
## Finding the best k minima
The deque approach itself is limited in the sense that it only allows you to find the minimum value over the ranges. But what happens if you want to calculate more than one minimum? We will discuss an approach that I used during a national ACM-style contest, where we were able to calculate the best 2 minima, and then argue that you can extend it to an arbitrary number of minimum values.
In order to store the lowest 2 values, we will do the following:
Keep 2 deques, namely D1 and D2. Do a similar algorithm of "stack-like popping" on D1 when you add a new element, but instead of discarding elements from D1 when popping, transfer them down to D2 and "stack-like pop" it.
It is easy to see why the lowest 2 elements will always be in one of the two deques. Moreover, there are only 2 cases for the lowest two elements: they are either the first two elements of D1, or the first elements of D1 and D2 subsequently. Checking the case should be an easy thing to do.
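A sketch of this two-deque scheme in Python (our naming; both deques store indices, and the second minimum is the smaller of the two candidate positions described above):

```python
from collections import deque

def two_minima(a, k):
    """(smallest, second smallest) of every length-k window, for k >= 2."""
    d1, d2 = deque(), deque()  # indices; values increasing front -> back
    out = []
    for i, x in enumerate(a):
        popped = []
        while d1 and a[d1[-1]] >= x:   # stack-like pop on D1
            popped.append(d1.pop())
        d1.append(i)
        # transfer once-dominated candidates to D2, in increasing index order,
        # stack-like popping D2 as we go
        for p in reversed(popped):
            while d2 and a[d2[-1]] >= a[p]:
                d2.pop()
            d2.append(p)
        # expire indices that slid out of the window
        if d1[0] <= i - k:
            d1.popleft()
        if d2 and d2[0] <= i - k:
            d2.popleft()
        if i >= k - 1:
            cands = []
            if len(d1) > 1:
                cands.append(a[d1[1]])   # case 1: first two elements of D1
            if d2:
                cands.append(a[d2[0]])   # case 2: fronts of D1 and D2
            out.append((a[d1[0]], min(cands)))
    return out

print(two_minima([3, 1, 2, 5, 4], 3))  # [(1, 2), (1, 2), (2, 4)]
```

An element discarded from D2 has been dominated by two later, smaller elements, so it can never again be among the two smallest in any window containing it.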
The extension to an arbitrary number of minima is, however, not so great, in the sense that the complexity of this approach becomes O(n * k^2) for an n-sized array, currently bottlenecked by the number of elements you have to consider in order to find the first k minima. [Maybe you can come up with a cleverer way of doing that?]
This is the problem I referred to above: http://www.infoarena.ro/problema/smax. I recommend trying to think it through and implementing it, and translating the statement via Google Translate or equivalent.
» 18 months ago, # | +8 Auto comment: topic has been updated by retrograd (previous revision, new revision, compare).
» 18 months ago, # | +23 Another (quite standard) way to solve the original problem: we need a queue with a "get the minimum" operation. We can simulate a queue with two stacks. A stack with minima is easy: we just need to store pairs (value, minimum of the values below this). Finding the best k minima can be solved in the same way; a stack with k minima is no harder: just store the k minima of the values below this. It obviously works in O(nk).
• » » 18 months ago, # ^ | ← Rev. 2 → +1 That is true, however I have compared some time ago this approach with the deque one, and it seemed that the queue was not only more memory-intensive but also noticeably slower. However, the fact about the complexity is true. At the same time, I realized that the complexity I mentioned was overshot, as you can find out the first k elements of k sorted arrays in O(k * log(k)) (using a priority queue)
» 18 months ago, # | +10 Good problem for practice: IOI 2006 B. Pyramid.
• » » 18 months ago, # ^ | -9 Where can I submit this problem ? Thanks in advance :)
• » » » 18 months ago, # ^ | +12
• » » » » 18 months ago, # ^ | 0 Thanks :)
» 18 months ago, # | 0 Yet another good problem where you can practice is 15D - Map, but be careful if you use Java, because even if you'll try to resubmit accepted solutions, you (in most cases) will have TLE 11.
» 18 months ago, # | ← Rev. 6 → -26 The following is an O( n ) algorithm for solving the 1-dimensional min/max subarray-sum problem without using a deque.

    pair< long long, long long > min_subarray_sum( const int A[], const size_t n, const size_t k )
    {
        long long initial = A[ 0 ];
        for( size_t i = 1; i < k; i++ )
            initial += A[ i ];
        pair< long long, long long > answer( initial, initial );
        if ( k < n )
        {
            long long delta = 0;
            pair< long long, long long > optimal_delta( 0, 0 );
            for( size_t i = k, j = 0; i < n; i++, j++ )
            {
                delta += A[ i ] - A[ j ];
                if ( delta < optimal_delta.first )
                    optimal_delta.first = delta;
                if ( delta > optimal_delta.second )
                    optimal_delta.second = delta;
            }
            if ( optimal_delta.first < 0 )
                answer.first += optimal_delta.first;
            if ( optimal_delta.second > 0 )
                answer.second += optimal_delta.second;
        }
        return answer;
    }

The 2-dimensional problem can be solved in O( m n ) using Summed-area Tables introduced by Crow in 1984 [1].

Reference [1] Crow, F.C., 1984. Summed-area Tables for Texture Mapping, in: Proceedings of the ACM 11th Annual Conference on Computer Graphics and Interactive Techniques, pp. 207–212. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.124.1904&rep=rep1&type=pdf
• » » 18 months ago, # ^ | ← Rev. 4 → 0 It may be helpful and useful that codeforces develops it like/dislike voting if possible to enable a voter to include an optional note explaining the reasons for liking/disliking a comment.
» 18 months ago, # | 0 I used a similar idea to solve Pyramid from IOI2006
» 18 months ago, # | ← Rev. 2 → 0 I solved the same question in JUNE CHALLENGE 2016 on Codechef.ProblemHere is my AC submission using the same idea:Code
» 18 months ago, # | +5 tweety You explained this to me just before the blog was released !!! Wizard
• » » 18 months ago, # ^ | +13
• » » 18 months ago, # ^ | +3 retrograd is working for the NSA confirmed
» 18 months ago, # | ← Rev. 3 → +5 Yet another method for solving the original problem:Let's call every element which has an index special. For every special element Ai we memorise the minimum for subarrays with indices [j, i], where i - k < j ≤ i and (i, j], where i < j < i + k. Since every query has exactly one special element inside its boundaries, we can easily answer the query as query(l, r) = min (query(l, i), query(i + 1, r)), where i is the special element inside the query. This gives linear space and time complexity.This can be extended to any number of dimension d with a complexity of . With k minimums the complexity should be if we use priority queues.As a bonus the same algorithm can also be extended to give complexity for the standard minimum query problem with no constraints for l and r by having special elements with steps 1, 2, 4, 8... (every query can be answered in constant time since there is at least one step where there are between 1 and 2 special nodes inside the boundaries).
» 18 months ago, # | 0 Decompose the matrix into blocks of k × l, and get the prefix sums respecting the upperleft, upperright, lowerleft and lowerright corners. All queries can be calculated using 4 such sums.Of course, one-dimensional version can be solved this way as well.
• » » 18 months ago, # ^ | +8 The downside is that for k-dimensional version a constant factor of 2k is required.
• » » 18 months ago, # ^ | 0 For "best k minima", you can store the k smallest elements in each prefix sum. You can do the addition and subtraction on a data structure storing "k smallest elements".With an appropriate data structure, I think a complexity of can be reached.
• » » » 14 months ago, # ^ | 0 I don't get how you can find the minima/maxima using prefix sums. Can you explain it a bit?
» 18 months ago, # | ← Rev. 2 → 0 For the "k-minima" case, it may just be better to use a persistent set data structure (e.g. a treap). This lets you find the k-minima for all fixed-size subarrays of an n-sized array in time, irrespective of the value of k (though actually getting the minima explicitly takes O(NK) time).
» 18 months ago, # | -11 Please star my projects and contribute if you are interested. 1. https://github.com/ArmenGabrielyan16/DiceRoller 2. https://github.com/ArmenGabrielyan16/SuperLibrary
» 18 months ago, # | +3 Here is a problem for practicing finding the minima and maxima of each subarray of size k in a 1D array of size n: Sliding Window
http://physics.stackexchange.com/questions/20272/strong-decay-and-parity-conservation | # Strong Decay and Parity Conservation?
The following decay is possible according to the PDG and according to my notes it is a strong decay:
$$\omega(1420) \to \rho^0 + \pi^0$$
The JPC values are:
$\omega(1420)$ 1--
$\rho$ 1--
$\pi$ 0-+
So, all three particles have, for themselves, a parity of -1.
The combined parity on the right side should then be (-1)*(-1) = +1, but the left side has a parity of -1. This seems to violate parity, yet parity should not be violated in a strong decay.
1) What's going on and where is the error in my argument?
2) How can I calculate the orbital angular momentum the two decay products have in relation to each other?
Welcome to Physics.SE, Nx1990. I've replaced your unicode greek letters with LaTeX alike markup for MathJax to render as it allow the use of superscripts. – dmckee Jan 30 '12 at 21:42
Question for the student: how does the parity of a state depend on it's angular momentum quantum number? – dmckee Jan 30 '12 at 21:44
Regarding to your question: I thought parity is an intrinsic property of a particle, and does not depend on the angular momentum. However, I seem to be wrong. There seems to be an additional factor of (-1)^L. Since the omega is a vectormeson, it has spin 1. Because J=1 for the omega, L must be 0. The pion has J=0 and S=0, so L=0. The rho has J=1 and S=0, so L=1. Is that correct? – Nx1990 Jan 30 '12 at 21:51
Now, if that is correct, the rho gets an additional factor of (-1)^1, so the parity of the rho is +1, and parity is conserved again. The relative angular momentum seems to be 1 then?! – Nx1990 Jan 30 '12 at 21:52
Feel free to write it up as an answer: self-answers are allowed and encouraged. Then the votes will tell you if you're right. – dmckee Jan 30 '12 at 23:59
1) I thought parity is an intrinsic property of a particle and does not depend on angular momentum. That is true for a single particle, but the parity of a multi-particle *state* is not just the product of the intrinsic parities: a two-particle state picks up an additional factor of (-1)^L from the relative orbital angular momentum L between the particles.

Here the $\omega$ has J=1, while the $\pi$ has J=0 and the $\rho$ has J=1, so conservation of total angular momentum allows a relative orbital angular momentum L=1 between the decay products.

With L=1, the total parity of the final state is (-1)*(-1)*(-1)^1 = -1, equal to the parity of the $\omega(1420)$, so parity is conserved after all.

2) From the argument in 1), the relative orbital angular momentum of the two decay products is L = 1.
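In summary, the parity bookkeeping for the final state (with $L$ the relative orbital angular momentum of the $\rho\pi$ pair) is:

$$P(\rho\pi) = P_\rho \, P_\pi \, (-1)^L = (-1)(-1)(-1)^1 = -1 = P_{\omega(1420)},$$

which is consistent with parity conservation in the strong decay; $L=1$ is also compatible with total angular momentum conservation, since spin $1 \otimes 0$ coupled with $L=1$ can give $J=1$.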
https://github.com/cbx33/gitt/blob/master/chap8.tex | # cbx33/gitt
% chap8.tex - Week 8 \cleardoublepage %\phantomsection \chapter{Week 8 - Patching, Bisecting, Bundling and Submodules} \section{Day 1 - Give a man a patch''} \subsection{Collaborating with outsiders} We have spoken at great length now about rebasing and have seen that it is a very very powerful tool. It can form part of your workflow in your development cycle. However, always heed that warning that should send alarm bells ringing in the back of your mind about rebasing. Rebasing changes the past. Rebasing changes history. As such, it should be used a) with caution, and b) only by people who understand exactly what they are doing. We are going to leave rebasing for a while now, take a quick look at a feature you really should know about and then focus on some of the more advanced features of Git. The following situation occurs fairly regularly for some people. \begin{trenches} John was stroking his chin and looking pensively out of the window when Simon approached his desk. The manager hadn't seen him yet and Simon instinctively swayed a little back and forth, try to make himself known in as subtle a way as possible. Klaus, who was watching from the corner of his eye took a more direct approach. He took the out of date org chart down from the office divider, screwed it up into a ball and launched it at John's head. It struck the manager squarely in the jaw causing him to almost tip from his awkwardly balanced chair. John noticed Simon standing there and looked a little surprised. He then noticed Klaus and in an instant understood the chain of events that had just taken place. Sorry Simon,'' started John, I've been trying to figure out a problem all morning.'' It's no problem.'' Simon pulled up a chair and sat down. I was wondering if you had a few minutes to discuss Luigi?'' \thoughtbreak Well as Luigi is a contractor, he's not going to get access to our repository here to perform commits directly. 
And he doesn't have the capability, nor do I really want him, making our code available on the internet. But he does have a clone of our repository from last week.''

John understood the problem. ``Right!''

``Have you heard of patching in Git?'' asked John.

Simon looked at his shoes, ``Can't say I have John, sorry.''

John smiled, ``No worries. What we can do is get Luigi to generate a patch of his changes. We can then take that patch and apply it to our codebase. Luigi can then just reset his clone when he comes into the office.'' Simon nodded as John continued, ``Go and ask Martha about it. I think she's pretty hot on these types of things.''

Klaus giggled, ``Think she's hot eh John?'' The paper was returned.
\end{trenches}
\index{patching!process}It is a good question though. Sometimes you may have a repository that is either publicly available, or made available to a group of people. You do not necessarily want to set up a remote tracking branch and pull changes in from every single contributor. There are two primary reasons for this:
\begin{enumerate}
\item There are a large number of people submitting small changes to the code.
\item There are difficulties in communicating between the two repositories, either for security or general reasons.
\end{enumerate}
In these cases we need another way to apply changes from one branch into another. Many larger open source projects allow contributors to email in patches. Git does have some rather advanced ways of dealing with these types of scenarios. We are going to scratch the surface and look at using three commands: \texttt{git apply}, \texttt{git format-patch} and \texttt{git am}.

\index{patching!generating}First, let us find a way of generating a patch. Let us take the example we have currently in our repository. Imagine that the \textbf{develop} branch exists on another computer in a clone of our repository. At some point in time, someone cloned our repository.
They have the HEAD of our repository at the same point as we do, but they have continued to do some development in a new branch called \textbf{develop}. Now they are ready to give those changes back. Firstly we are going to look at using the \texttt{git diff} tool to generate a patch file which we can apply.
\begin{code}
john@satsuki:~/coderepo$ git checkout develop
Already on 'develop'
john@satsuki:~/coderepo$ git diff master develop
diff --git a/newfile2 b/newfile2
index 3545c1d..ff59f55 100644
--- a/newfile2
+++ b/newfile2
@@ -1,2 +1,3 @@
 Another new file
 and a new awesome feature
+newer dev work
diff --git a/newfile3 b/newfile3
index 638113c..2e00739 100644
--- a/newfile3
+++ b/newfile3
@@ -1 +1,2 @@
 These changes are in the origin
+new dev work
john@satsuki:~/coderepo$
\end{code}
That will generate a diff that takes us from the \textbf{master} branch to the \textbf{develop} branch. We could copy and paste that information from the terminal window into a file, but Linux offers us an easier way of doing this.
\begin{code}
john@satsuki:~/coderepo$ git diff master develop > our_patch.diff
john@satsuki:~/coderepo$ cat our_patch.diff
diff --git a/newfile2 b/newfile2
index 3545c1d..ff59f55 100644
--- a/newfile2
+++ b/newfile2
@@ -1,2 +1,3 @@
 Another new file
 and a new awesome feature
+newer dev work
diff --git a/newfile3 b/newfile3
index 638113c..2e00739 100644
--- a/newfile3
+++ b/newfile3
@@ -1 +1,2 @@
 These changes are in the origin
+new dev work
john@satsuki:~/coderepo$
\end{code}
\index{patching!applying}So we can see that the file itself has the information we are looking for. Now we can use the \indexgit{apply} tool to actually modify the files in \textbf{master} and bring in the changes that have happened in \textbf{develop}.
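Incidentally, \texttt{git apply} can also vet a patch before it touches anything. The following is a minimal sketch on a throwaway repository, not the book's example repository: it assumes only that \texttt{git} is on the \texttt{PATH}, and the identity, branch and file names are invented for illustration.

```shell
# Sketch: preview a diff-style patch before applying it (hypothetical repo).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email john.haskins@example.invalid   # placeholder identity
git config user.name "John Haskins"
base=$(git symbolic-ref --short HEAD)   # initial branch name (master or main)
printf 'Another new file\n' > newfile2
git add newfile2
git commit -qm 'Initial commit'
git checkout -qb develop
printf 'newer dev work\n' >> newfile2
git commit -qam 'Some new dev work'
git checkout -q "$base"
git diff "$base" develop > our_patch.diff
git apply --stat our_patch.diff    # summarise what the patch would touch
git apply --check our_patch.diff   # exits non-zero if it would not apply cleanly
git apply our_patch.diff           # actually modify the working tree
grep "newer dev work" newfile2
```

Running \texttt{--stat} and \texttt{--check} first is cheap insurance: if the patch was generated against a different base, \texttt{--check} fails before your working tree is half-modified.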
\begin{code}
john@satsuki:~/coderepo$ git checkout master
Switched to branch 'master'
john@satsuki:~/coderepo$ git apply our_patch.diff
john@satsuki:~/coderepo$ git diff
diff --git a/newfile2 b/newfile2
index 3545c1d..ff59f55 100644
--- a/newfile2
+++ b/newfile2
@@ -1,2 +1,3 @@
 Another new file
 and a new awesome feature
+newer dev work
diff --git a/newfile3 b/newfile3
index 638113c..2e00739 100644
--- a/newfile3
+++ b/newfile3
@@ -1 +1,2 @@
 These changes are in the origin
+new dev work
john@satsuki:~/coderepo$ git commit -a -m 'Updated with patch'
[master 81eee9f] Updated with patch
 2 files changed, 2 insertions(+), 0 deletions(-)
john@satsuki:~/coderepo$ git diff develop master
john@satsuki:~/coderepo$
\end{code}
Of course, doing things this way means that we still have to commit our changes. Plus, all of the changes that we have made in the patch are committed in one block. Sure, we could split that using some of the techniques in the After Hours sections, but then we may not always be aware of what should be split where.

\subsection{Can we have some order please?}
There is another tool that can come to our rescue here. It is primarily used for working with \index{mbox}mailboxes, but it also has some other uses which we will describe here. Would it not be nice to be able to have each commit that we want to use as a patch in a separate patch file? The file \texttt{our\_patch.diff} above contained two commits' worth of data. We have access to another tool in our fight against disparate systems. This is the \indexgit{format-patch} command. First we will undo the changes we made previously by resetting the \textbf{master} branch back to its older position and deleting the \texttt{our\_patch.diff} file.
\begin{code}
john@satsuki:~/coderepo$ git reflog show master -n 4
81eee9f master@{0}: commit: Updated with patch
f8d5100 master@{1}: commit: Finished new dev
1968324 master@{2}: commit: Start new dev
john@satsuki:~/coderepo$ git reset --hard f8d5100
HEAD is now at f8d5100 Finished new dev
john@satsuki:~/coderepo$ rm our_patch.diff
john@satsuki:~/coderepo$
\end{code}
We used the \texttt{git reflog} command to show what the last four \textbf{master} HEAD values were. Then we reset the branch back to the point before the \texttt{git apply}. Finally we deleted the patch. \index{patching!multiple file generation}Now let us see how to use the \texttt{git format-patch} command to create multiple patch files.
\begin{code}
john@satsuki:~/coderepo$ git format-patch master..develop
0001-Some-new-dev-work.patch
0002-More-new-deving.patch
john@satsuki:~/coderepo$
\end{code}
It would appear that the result of this command is that two files have been generated. Let us confirm our suspicions and \texttt{cat} the contents of them to ensure that they contain the data we expect.
\begin{code}
john@satsuki:~/coderepo$ cat 0001-Some-new-dev-work.patch
From af3c6d730a8632d99b5626a7c0e921d14af21f50 Mon Sep 17 00:00:00 2001
From: John Haskins
Date: Thu, 7 Jul 2011 19:01:59 +0100
Subject: [PATCH 1/2] Some new dev work

---
 newfile3 |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/newfile3 b/newfile3
index 638113c..2e00739 100644
--- a/newfile3
+++ b/newfile3
@@ -1 +1,2 @@
 These changes are in the origin
+new dev work
--
1.7.4.1

john@satsuki:~/coderepo$
\end{code}
Woah! Hold on a minute. This does not seem to be a normal diff file at all. In fact, that is absolutely right. This is a patch file and the two are not the same. The patch file contains much more information than the simple diff file. For a start we get information about which commit this patch came from, who created it, when, and a subject. In fact this looks almost like an email.
In fact, it is created to resemble a format that would be easily emailable. \index{patching!a range}We have specified a range of commits to the \texttt{git format-patch} command with the parameter \texttt{master..develop}. The format of that parameter should be familiar from earlier chapters, when we utilised it for commands like \texttt{git diff} and \texttt{git log}. We could now take those files, email them to someone else and they could apply them. Let us learn one more tool, and see how we would apply those patches when they had been received at the other end.
\begin{code}
john@satsuki:~/coderepo$ git am 0001-Some-new-dev-work.patch
Applying: Some new dev work
john@satsuki:~/coderepo$ git am 0002-More-new-deving.patch
Applying: More new deving
john@satsuki:~/coderepo$ git diff master..develop
john@satsuki:~/coderepo$
\end{code}
Of course this is just a simple example case and in actual usage there may be cases where conflicts and other complications occur. Looking at a log output, we can see that the original dates and times of the commits are maintained and are not updated. We can ignore this if we wish and use the \texttt{--ignore-date} parameter to use the current date when committing the patch to the repository.
\begin{code}
john@satsuki:~/coderepo$ git log -n4
commit 30900fe1b7e72411dabab8b02070f36e2431f704
Author: John Haskins
Date:   Thu Jul 7 19:02:15 2011 +0100

    More new deving

commit a8281fb589e36389cc8cb0da7ebee225b4d1adfc
Author: John Haskins
Date:   Thu Jul 7 19:01:59 2011 +0100

    Some new dev work

commit f8d5100142b43ffaba9bbd539ba4fd92af79bf0e
Author: John Haskins
Date:   Thu Jul 7 08:39:29 2011 +0100

    Finished new dev

commit 1968324ce2899883fca76bc25496bcf2b15e7011
Author: John Haskins
Date:   Thu Jul 7 08:39:07 2011 +0100

    Start new dev
john@satsuki:~/coderepo$
\end{code}
Interestingly, if we use our alias for the log command we see something maybe a little unexpected.
\begin{code}
john@satsuki:~/coderepo$ git logg -n6
* 30900fe (HEAD, master) More new deving
* a8281fb Some new dev work
| * aed985c (develop) More new deving
| * af3c6d7 Some new dev work
|/
* f8d5100 Finished new dev
* 1968324 Start new dev
john@satsuki:~/coderepo$
\end{code}
Notice that the branch \textbf{master} has not simply been fast forwarded to the tip commit of \textbf{develop}. This is because we have not performed a merge; in a sense we have manually made the changes to the files and created separate commits for them. In this way the commits \textbf{30900fe} and \textbf{a8281fb} are not the same as their \textbf{develop} counterparts. If you intend to use this workflow, it is worth spending some time reading the man pages for \texttt{git am} and \texttt{git format-patch}, as both of them hold valuable information regarding the customisation and handling of patches and emails. Tamagoyaki Inc. are not going to use this workflow often, and so just applying a few patches here and there from contractors using these methods is perfectly acceptable to them. If you were a large open source establishment, or any company that accepts a large number of patches, you may want to take a closer look at how to work with these tools. Now it is time to move on to some more advanced topics within Git, but first a little cleanup.
\begin{code}
john@satsuki:~/coderepo$ rm 0001-Some-new-dev-work.patch
john@satsuki:~/coderepo$ rm 0002-More-new-deving.patch
john@satsuki:~/coderepo$
\end{code}

\section{Day 2 - ``Looking for problems''}
\subsection{A problem shared is a problem bisected}
\index{bisecting}During most software development, bugs are introduced. Sometimes these bugs are fixed immediately and sometimes they sit there in the code festering away for months on end until someone tests a specific case.
Of course it is always best to have test suites and run them regularly against the code base, but on occasions either the test case itself has a bug, or the test case is written in such a way that a particular bug would never present itself. Tamagoyaki Inc. have a fairly rigorous testing procedure. Unfortunately it would seem that one particularly nasty bug has slipped through the cracks. Cue a difficult discussion.
\begin{trenches}
``But what I don't understand John, is that you now know what happened at every step in the process. How can something like this break and you not know about it?'' As always Markus was getting snappy and as always John was having to bite his lip.

``It's not a question of not knowing about it,'' began John, ``The difficulty is knowing what change introduced the problem. We are on such a rapid development schedule that too many things are changing at once.''

``Well, this is one of the reasons you guys have spent the last two months getting this version control system running.'' Markus got up and opened the door. ``I suggest you fix it.''

\thoughtbreak

``Markus is blaming us for introducing a bug?'' Rob was pretty shocked as he and Simon chatted at the water cooler.

``More like, Markus believed that a version control system was going to solve all of our problems,'' replied Simon.

Rob squinted his face up as a car drove into the building's car park, showering the room with reflected sunlight. He shielded his eyes. ``You know I heard there was a tool in Git for helping to find bugs. Think I may take a look over lunch, you know, be a real hero.''

They both chuckled.
\end{trenches}
It is true that Git does have a very powerful tool for helping to detect revisions that introduced bugs into the system. The tool is called \indexgit{bisect} and it is used to successively check out revisions from the repository, check them to see if the bug is present, and then use that information to determine the revision that is most likely to have introduced the bug.
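That whole good/bad search can be rehearsed safely on a scratch repository before you try it on real code. The sketch below is entirely hypothetical (it assumes only that \texttt{git} is installed; the file names and identity are invented): it builds ten commits, deliberately breaks the sixth, and then drives the search automatically with \texttt{git bisect run}, a variant covered in more detail later in this chapter.

```shell
# Sketch: let git bisect find the commit that broke a throwaway repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email john.haskins@example.invalid   # placeholder identity
git config user.name "John Haskins"
echo "Addition" > watched_file
git add watched_file
git commit -qm 'commit 1'
good=$(git rev-parse HEAD)              # known good revision
for i in 2 3 4 5 6 7 8 9 10; do
    if [ "$i" -eq 6 ]; then
        echo "broken" > watched_file    # the bug sneaks in here
        git commit -qam 'commit 6 (introduces bug)'
        culprit=$(git rev-parse HEAD)
    else
        echo "change $i" > "file_$i"
        git add "file_$i"
        git commit -qm "commit $i"
    fi
done
git bisect start HEAD "$good"           # bad revision first, then good
# Exit code 0 marks a revision good, non-zero marks it bad (125 means skip);
# grep -q gives us exactly 0 or 1.
git bisect run grep -q Addition watched_file
first_bad=$(git rev-parse refs/bisect/bad)   # the commit bisect blamed
git bisect reset > /dev/null
test "$first_bad" = "$culprit" && echo "bisect found the culprit"
```

With ten commits, bisect needs only about three test runs to isolate the culprit, which is the whole point: the number of tests grows with the logarithm of the number of revisions, not linearly.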
\index{bisecting!simple}Let us assume that the bug in our repository is a fairly simple one. For some bizarre reason our codebase is broken unless the word \texttt{Addition} is present in one of the files. If we run a simple Linux \texttt{grep} command across the files, we can see that the word we are after is not there. However, if we go back to tag \textbf{v1.0a} and run the same command, we can see that the word is there.
\begin{code}
john@satsuki:~/coderepo$ grep "Addition" *
john@satsuki:~/coderepo$ git checkout v1.0a
Note: checking out 'v1.0a'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at a022d4d... Messed with a few files
john@satsuki:~/coderepo$ grep "Addition" *
my_third_committed_file:Addition to the line
john@satsuki:~/coderepo$
\end{code}
Notice the warning about checking out a non-branch. This is perfectly normal and should not worry you, but please be aware that it is obviously best to have a clean working directory before starting any type of \texttt{bisect} commands. We can see that the string we are looking for is present in the file called \texttt{my\_third\_committed\_file}. As our repository is very small, it would not take us long to go through and check each revision to see when this string was deleted. In fact we have other tools available to search for the adding and removal of strings. For now let us assume that the \emph{bug} is more complicated than this. Let us go back to the facts. \index{bisecting!set good point}\index{bisecting!set bad point}We know that the repository was \textbf{good} at tag \textbf{v1.0a}. We also know that the repository is bad in its current state.
By feeding these details to the \texttt{git bisect} command, we can begin a search for the bug. What will happen at each stage is that Git will check out a revision that it wants us to test and we tell Git if we think that revision is good or bad.
\begin{code}
john@satsuki:~/coderepo$ git bisect start
Already on 'master'
john@satsuki:~/coderepo$ git bisect good v1.0a
john@satsuki:~/coderepo$ git bisect bad master
Bisecting: 9 revisions left to test after this (roughly 3 steps)
[ed2301ba223a63a5a930b536a043444e019460a7] Removed third file
john@satsuki:~/coderepo$
\end{code}
So we invoke the tool by running \texttt{git bisect start}. After this we tell Git the things that we know. It was good at \textbf{v1.0a}, \texttt{git bisect good v1.0a}. However, it was bad at \textbf{master}, our current revision, \texttt{git bisect bad master}. After this, Git checks out revision \textbf{ed2301b} and tells us that there are \texttt{9} revisions between the two points and that it should take only \texttt{3} more steps to complete. Now we run our test again.
\begin{code}
john@satsuki:~/coderepo$ grep "Addition" *
john@satsuki:~/coderepo$
\end{code}
\index{bisecting!marking result}As we have no result here, this would be classed as a bad revision and so we mark it as such.
\begin{code}
john@satsuki:~/coderepo$ git bisect bad
Bisecting: 3 revisions left to test after this (roughly 2 steps)
[9710177657ae00665ca8f8027b17314346a5b1c4] Added another file
john@satsuki:~/coderepo$
\end{code}
Git now presents us with a new choice and you can see that the number of revisions left to check has decreased dramatically from \texttt{9} to \texttt{3}. We continue marking our revisions as good and bad.
\begin{code}
john@satsuki:~/coderepo$ grep "Addition" *
my_third_committed_file:Addition to the line
john@satsuki:~/coderepo$ git bisect good
Bisecting: 2 revisions left to test after this (roughly 1 step)
[cfbecabb031696a217b77b0e1285f2d5fc2ea2a3] Fantastic new feature
john@satsuki:~/coderepo$ grep "Addition" *
my_third_committed_file:Addition to the line
john@satsuki:~/coderepo$ git bisect good
Bisecting: 0 revisions left to test after this (roughly 1 step)
[b119573f4508514c55e1c4e3bebec0ab3667d071] Merge branch 'wonderful'
john@satsuki:~/coderepo$ grep "Addition" *
my_third_committed_file:Addition to the line
john@satsuki:~/coderepo$ git bisect good
ed2301ba223a63a5a930b536a043444e019460a7 is the first bad commit
commit ed2301ba223a63a5a930b536a043444e019460a7
Author: John Haskins
Date:   Fri Apr 1 07:37:34 2011 +0100

    Removed third file

:100644 000000 68365cc0e5909dc366d31febf5ba94a3268751c6 0000000000000000000000000000000000000000 D my_third_committed_file
john@satsuki:~/coderepo$
\end{code}
Oh! Something different. Git has actually finished the bisect and has suggested to us that this commit was responsible for introducing the bug into our code. If we look at the comment, it was in this revision that we removed a particular file. This file was the one that contained our special \texttt{Addition} string. Git was right! We screwed up then. At this point we can go back to our \textbf{master} branch and decide what to do about it.
\begin{code}
john@satsuki:~/coderepo$ git branch -v
* (no branch) b119573 Merge branch 'wonderful'
  develop     aed985c More new deving
  master      30900fe More new deving
  wonderful   4d91aab Updated another file again
  zaney       7cc32db Made another awesome change
john@satsuki:~/coderepo$ git checkout master
Previous HEAD position was b119573... Merge branch 'wonderful'
Switched to branch 'master'
john@satsuki:~/coderepo$
\end{code}
Notice that at the end of the bisect, Git does not return us to the master branch.
We are left in the last tested checked out revision.

\subsection{Automating the process}
\index{bisecting!automation}So bisecting is a very powerful way of quickly and efficiently finding the point at which a bug was introduced, or of performing \index{regression testing}regression testing. Git was spot on when it suggested that that revision was the one responsible for the mistake. Sometimes you may not be able to test a revision that Git checks out for you, for one reason or another. In this case you can always run \texttt{git bisect skip} to skip that revision. It is all very well being able to run the test at each revision Git asks us to, but to be honest, if you have 30-40 steps to test and you have to compile code to see if the bug is present, it can get a little bit boring. Git has a way of allowing us to test automatically. The example we are going to use is obviously based on a Linux environment, but if you are a developer on a Windows platform, you should have no trouble understanding what is happening here. We are going to create a small shell script that will automatically run our grep test. If the string is found we will exit with a status code of \texttt{0}, indicating that it was successful, and if the string is not found, we will exit with a status code of \texttt{123}, indicating that the test was unsuccessful. Git will use these status codes and interpret a code of \texttt{0} as \textbf{good} and a code of \texttt{123} as \textbf{bad}. Below is a copy of our shell script, which we have saved as \texttt{test.sh} and given the relevant permissions to allow it to run. Notice we have had to exclude our \texttt{test.sh} file from the test, else the string \texttt{Addition} would have been found there, which would have returned true every time.
\begin{code}
john@satsuki:~/coderepo$ cat test.sh
#!/bin/bash
if grep -q Addition * --exclude=test.sh
then
    echo "Good"
    exit 0
else
    echo "Bad"
    exit 123
fi
john@satsuki:~/coderepo$
\end{code}
Now we invoke \texttt{git bisect} slightly differently, by asking it to start and iterate over the revisions \texttt{master} to \texttt{v1.0a}. At this point we have not told Git anything about which revisions are good or bad.
\begin{code}
john@satsuki:~/coderepo$ git bisect start master v1.0a
Bisecting: 9 revisions left to test after this (roughly 3 steps)
[ed2301ba223a63a5a930b536a043444e019460a7] Removed third file
john@satsuki:~/coderepo$
\end{code}
Now we ask Git to continue testing, but to run our script at each iteration to determine the success or failure of each checked out revision.
\begin{code}
john@satsuki:~/coderepo$ git bisect run sh ./test.sh
running sh ./test.sh
Bad
Bisecting: 3 revisions left to test after this (roughly 2 steps)
[9710177657ae00665ca8f8027b17314346a5b1c4] Added another file
running sh ./test.sh
Good
Bisecting: 2 revisions left to test after this (roughly 1 step)
[cfbecabb031696a217b77b0e1285f2d5fc2ea2a3] Fantastic new feature
running sh ./test.sh
Good
Bisecting: 0 revisions left to test after this (roughly 1 step)
[b119573f4508514c55e1c4e3bebec0ab3667d071] Merge branch 'wonderful'
running sh ./test.sh
Good
ed2301ba223a63a5a930b536a043444e019460a7 is the first bad commit
commit ed2301ba223a63a5a930b536a043444e019460a7
Author: John Haskins
Date:   Fri Apr 1 07:37:34 2011 +0100

    Removed third file

:100644 000000 68365cc0e5909dc366d31febf5ba94a3268751c6 0000000000000000000000000000000000000000 D my_third_committed_file
bisect run success
john@satsuki:~/coderepo$
\end{code}
The parameters after the \texttt{git bisect run} tell Git which command we wish to run at each stage. In our case it is \texttt{sh ./test.sh}.
You can see Git invoking our \texttt{test.sh} script in each case, and the result of our script, either \texttt{Good} or \texttt{Bad}, depending on which was echoed from the result of the grep test. Git has arrived at exactly the same result, but we have had to do nothing other than write a small script. For larger tests, this would have saved us a large amount of work.
\begin{trenches}
``Simon could I have a word?'' It was Rob and he wasn't looking happy.

Simon turned to him and grinned, ``Sure buddy what's up?'' His face dropped when he saw Rob's expression.

``I think we'd better go grab the meeting room.'' Simon looked confused. ``I used the bisect tool to find the bug. But you're not gonna like what I found.''

\thoughtbreak

``Simon how could you have done that?'' John was asking the questions and they were coming thick and fast. ``I mean changing the API key for the web service whilst developing was not a great idea to start with, but committing that to the repository was ridiculous.''

Simon sat there with his head in his hands.

``You know how secret that API key is right?'' Simon nodded. ``Simon we were supposed to be releasing this repository publicly in a few weeks but now that the API key is in there we can't do that.''

``John I'm really sorry OK.'' Simon was kicking himself for his mistake.

John sighed, he had been really angry to begin with but now he was calming down, ``It's OK Simon, we're all getting used to the repository and version control. Do you think we can fix it?''
\end{trenches}

\section{Day 3 - ``Filtered repos''}
\subsection{Looking at a repo with rose tinted glasses}
\index{filtering}It does happen. Sometimes when people are under pressure, mistakes are made, just like earlier when we accidentally deleted our branch from the repository. This time the mistake is a little more crucial, but again it does happen and it sometimes goes a long time before it is noticed.
\begin{trenches}
``So it's been in there for how long?'' asked John.
Simon looked pretty sheepish as he mouthed the words, ``Weeks.''

John bit on the end of the pen in his hand. His teeth chewed into the plastic, deforming the blue lid. ``Did you find a way of sorting it out yet?''

``I think so. It's not ideal, but I think so.''
\end{trenches}
It would be useful if we could rewrite the history to remove the information that we wanted to. As it turns out, there is a tool that we can use to do this. The \indexgit{filter-branch} command allows us to run operations on a branch to rewrite its history. Hopefully you are already remembering about the care we need to take when rewriting history, but sometimes there is a real need to perform some of these operations. Let us take a look at a few examples to see how this can work. We are going to assume that our file \texttt{newfile1} contains some very sensitive information and we wish to remove it completely from the repository.
\begin{code}
john@satsuki:~/coderepo$ git checkout master
Already on 'master'
john@satsuki:~/coderepo$ ls -la
total 40
drwxr-xr-x  3 john john 4096 2011-07-27 19:54 .
drwxr-xr-x 32 john john 4096 2011-07-27 19:00 ..
-rw-r--r--  1 john john   35 2011-07-22 07:15 another_file
-rw-r--r--  1 john john   25 2011-07-22 07:15 cont_dev
drwxrwxr-x  9 john john 4096 2011-07-27 19:54 .git
-rw-r--r--  1 john john   69 2011-07-27 19:54 newfile1
-rw-r--r--  1 john john   58 2011-07-22 07:15 newfile2
-rw-r--r--  1 john john   45 2011-07-22 07:15 newfile3
-rw-r--r--  1 john john    8 2011-03-31 22:15 temp_file
-rwxrwxr-x  1 john john  114 2011-07-21 21:17 test.sh
john@satsuki:~/coderepo$
\end{code}
As you can see, currently we have \texttt{newfile1} in our tree. We can also use the \texttt{git log} tool to see each commit which has touched that path.
\begin{code}
john@satsuki:~/coderepo$ git log --pretty=oneline master -- newfile1
9cb2af2a00fd2253060e6bf8cc6c377b3d55ecea Important Update
d50ffb2fa536d869f2c4e89e8d6a48e0a29c5cc1 Merged in zaney
a27d49ef11d9f0e66edbad8f6c7806510ad5b2be Made an awesome change
cfbecabb031696a217b77b0e1285f2d5fc2ea2a3 Fantastic new feature
55fb69f4ad26fdb6b90ac6f43431be40779962dd Added two new files
john@satsuki:~/coderepo$
\end{code}
So there were five commits in the past which have touched that path. In our example we require the removal of this path from the entire history of the repository. As this is a destructive operation that works on the current branch, meaning it will rewrite our branch HEAD, we are first going to switch into a new branch.
\begin{code}
john@satsuki:~/coderepo$ git checkout -b remove_file
Switched to a new branch 'remove_file'
john@satsuki:~/coderepo$
\end{code}
\index{filtering!index}Now we need to run the \texttt{git filter-branch} tool.
\begin{code}
john@satsuki:~/coderepo$ git filter-branch --index-filter 'git rm --cached --ignore-unmatch newfile1' HEAD
Rewrite 55fb69f4ad26fdb6b90ac6f43431be40779962dd (6/21)rm 'newfile1'
Rewrite 9710177657ae00665ca8f8027b17314346a5b1c4 (7/21)rm 'newfile1'
Rewrite 4ac92012609cf8ed2480aa5d7f807caf2545fe2f (8/21)rm 'newfile1'
Rewrite cfbecabb031696a217b77b0e1285f2d5fc2ea2a3 (9/21)rm 'newfile1'
Rewrite b119573f4508514c55e1c4e3bebec0ab3667d071 (10/21)rm 'newfile1'
Rewrite ed2301ba223a63a5a930b536a043444e019460a7 (11/21)rm 'newfile1'
Rewrite a27d49ef11d9f0e66edbad8f6c7806510ad5b2be (12/21)rm 'newfile1'
Rewrite 7cc32dbf121f2afa8c40337db54bafb26de5b9c4 (13/21)rm 'newfile1'
Rewrite d50ffb2fa536d869f2c4e89e8d6a48e0a29c5cc1 (14/21)rm 'newfile1'
Rewrite 9cb2af2a00fd2253060e6bf8cc6c377b3d55ecea (15/21)rm 'newfile1'
Rewrite 37950f861a3cc0868c65ee9571fc6c491aa689ea (16/21)rm 'newfile1'
Rewrite 1c3206aac0fb012bfdaf5ff00e320b565bb89e7d (17/21)rm 'newfile1'
Rewrite 1968324ce2899883fca76bc25496bcf2b15e7011 (18/21)rm 'newfile1'
Rewrite f8d5100142b43ffaba9bbd539ba4fd92af79bf0e (19/21)rm 'newfile1'
Rewrite a8281fb589e36389cc8cb0da7ebee225b4d1adfc (20/21)rm 'newfile1'
Rewrite 30900fe1b7e72411dabab8b02070f36e2431f704 (21/21)rm 'newfile1'
Ref 'refs/heads/remove_file' was rewritten
john@satsuki:~/coderepo$
\end{code}
We have passed a few parameters to \texttt{git filter-branch} and we should take a few seconds to discuss this, as the syntax may seem a little strange. Firstly we are invoking the \texttt{git filter-branch} tool; that should not be anything new at all. Next, we are passing three parameters to it. The first of these is the type of filter we wish to use. In our case we have used the \texttt{--index-filter} option. More information is available in the Git manual, but in a nutshell we have asked Git to work on the \emph{index} at each commit stage. \index{filtering!tree}There is another similar option called \texttt{--tree-filter}; however, care must be taken to distinguish between the two, as using \texttt{--tree-filter} checks out the commit at each point in history. This may not sound like a problem, until you discover that as well as checking each revision out, it also automatically adds any untracked files in the working tree and commits them. The next parameter is the actual command that we wish Git to perform on each revision. In this case we want to run \texttt{git rm --cached --ignore-unmatch newfile1} each time. We have enclosed the command we wish to run inside quotes so that Git does not get confused over which parameters are part of the \texttt{filter-branch} and which are part of the \texttt{rm}. Using these options we have asked Git to work on just the \emph{index} and not to complain if it cannot find the file to delete. Lastly we list the commit range we wish to filter. In this case we have specified the target revision as \texttt{HEAD}. Git will interpret this as meaning everything up to the \texttt{HEAD} revision. As such, Git will be rewriting the entire history of the branch.
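Because \texttt{filter-branch} is so destructive, it is worth rehearsing the exact invocation on a disposable repository first. The sketch below is entirely hypothetical (invented file names and identity, and it assumes \texttt{git} is installed); note that recent versions of Git print a loud deprecation warning and pause before running \texttt{filter-branch} unless the \texttt{FILTER\_BRANCH\_SQUELCH\_WARNING} environment variable is set.

```shell
# Sketch: purge one file from history with an index filter (scratch repo).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email john.haskins@example.invalid   # placeholder identity
git config user.name "John Haskins"
echo "top secret" > newfile1
echo "harmless"   > another_file
git add .
git commit -qm 'Added two new files'
echo "more secrets" >> newfile1
git commit -qam 'Important Update'
# Rewrite every commit up to HEAD, dropping newfile1 from the index each time.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch \
    --index-filter 'git rm --cached --ignore-unmatch newfile1' HEAD
# No commit in the rewritten history touches the path any more.
test -z "$(git log --pretty=oneline -- newfile1)" && echo "newfile1 purged"
```

Even after this, the backup that \texttt{filter-branch} leaves under \texttt{refs/original/} still references the old commits, so the data is not truly gone until further cleanup is performed.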
Now if we list the files in the directory, we can see something important has happened. The file that we wanted removed has gone, and \texttt{newfile1} is no more.
\begin{code}
john@satsuki:~/coderepo$ ls -la
total 36
drwxr-xr-x  3 john john 4096 2011-07-27 19:53 .
drwxr-xr-x 32 john john 4096 2011-07-27 19:00 ..
-rw-r--r--  1 john john   35 2011-07-22 07:15 another_file
-rw-r--r--  1 john john   25 2011-07-22 07:15 cont_dev
drwxrwxr-x  9 john john 4096 2011-07-27 19:53 .git
-rw-r--r--  1 john john   58 2011-07-22 07:15 newfile2
-rw-r--r--  1 john john   45 2011-07-22 07:15 newfile3
-rw-r--r--  1 john john    8 2011-03-31 22:15 temp_file
-rwxrwxr-x  1 john john  114 2011-07-21 21:17 test.sh
john@satsuki:~/coderepo$
\end{code}
Re-running the log command we ran earlier against our new branch confirms our operation. However, checking out the \textbf{master} branch also confirms that the file is still present elsewhere.
\begin{code}
john@satsuki:~/coderepo$ git log --pretty=oneline remove_file -- newfile1
john@satsuki:~/coderepo$ git checkout master
Switched to branch 'master'
john@satsuki:~/coderepo$ ls -la
total 40
drwxr-xr-x  3 john john 4096 2011-07-27 19:54 .
drwxr-xr-x 32 john john 4096 2011-07-27 19:00 ..
-rw-r--r--  1 john john   35 2011-07-22 07:15 another_file
-rw-r--r--  1 john john   25 2011-07-22 07:15 cont_dev
drwxrwxr-x  9 john john 4096 2011-07-27 19:54 .git
-rw-r--r--  1 john john   69 2011-07-27 19:54 newfile1
-rw-r--r--  1 john john   58 2011-07-22 07:15 newfile2
-rw-r--r--  1 john john   45 2011-07-22 07:15 newfile3
-rw-r--r--  1 john john    8 2011-03-31 22:15 temp_file
-rwxrwxr-x  1 john john  114 2011-07-21 21:17 test.sh
john@satsuki:~/coderepo$
\end{code}
It should be stressed at this point how destructive the \texttt{git filter-branch} command can be to your repository. The \textbf{master} and \textbf{remove\_file} branches have diverged from the point where \texttt{newfile1} was first introduced.
Consequently, all of our other branches, such as \textbf{zaney} and \textbf{wonderful}, still refer to the old, unfiltered history of the \textbf{master} branch. We would have to rewrite those branches too, but because of the rewriting of commit objects, we could lose the relationships between the branches and their ancestors. In short, though it is exceedingly powerful, this type of filtering can cause huge distress to other people working on the project.
\begin{trenches}
``So what do we do?'' asked John. ``We can't push out the repo as it is because it contains the API key.'' He massaged his forehead, moving down to his eyebrows. ``But we seem to be introducing a real headache if we filter the branch. Any suggestions?''

``Well the project is going to be finished in a few weeks right?'' Simon was sitting at the end of the table. He was ashamed and was talking through a pair of hands desperately trying to conceal his identity.

``Yeh, but what the hell has that got to do with it?'' snorted Klaus.

``I'm just thinking that we leave the repo like it is until all development has finished,'' he paused to run his hands through his hair, ``then we filter the branch just before we release it.'' He looked over at John, ``At that point there shouldn't be any test or dev branches, and we can just get everyone to clone the repo if we need to do anything else.''

John nodded. ``You know Simon, I think you may have just redeemed yourself.''
\end{trenches}
\begin{callout}{Note}{Since you've been gone}
\index{filtering!purging}Even though we have rewritten our tree, the fact that another branch still has the file present means that our potentially sensitive data still exists somewhere inside the repository. In order to truly get rid of the file we would need to not only remove the file from all branches, or delete the branches that contained the file, but also run a few more steps if we wanted to ensure the file was gone \emph{now}. Be aware that these steps are potentially very destructive to a repository.
The best way to remove the file completely would be to remove ALL references to the file and then clone the repository. Git will not clone objects into a new repository if nothing references them. Alternatively if you absolutely must work on the current repository, you would need to do the following. \newline \newline Delete the \texttt{filter-branch} backup using \index{git commands!update-ref@\texttt{update-ref}}\texttt{git update-ref -d}. (See the callout on \emph{More backups}) \newline \newline Expire all reflogs with \texttt{git reflog expire --expire=now --all} \newline \newline Repack all of the pack files with \texttt{git repack -ad}\index{git commands!repack@\texttt{repack}} \newline \newline Prune all unreachable objects with \texttt{git prune}\index{git commands!prune@\texttt{prune}} \newline \newline As you can see some of these are quite scary procedures and so it is important that you understand all that you are doing before you do it. \end{callout} The idea being proposed here is only really viable because of Tamagoyaki's situation. The code is due to be finished soon and once that happens, the team have decided to push a rewritten branch into the public domain and to resync all of their development repositories to this new branch. It should be noted that the \texttt{filter-branch} tool can be used in other circumstances too. We are going to take a look at just one of these. However, let us first clean up our repository a little and move some things around. \begin{code} john@satsuki:~/coderepo$mkdir tester john@satsuki:~/coderepo$ ls another_file cont_dev newfile1 newfile2 newfile3 temp_file tester test.sh john@satsuki:~/coderepo$mv test.sh tester/ john@satsuki:~/coderepo$ git mv newfile* tester john@satsuki:~/coderepo$git add tester/test.sh john@satsuki:~/coderepo$ rm temp_file john@satsuki:~/coderepo$git status # On branch master # Changes to be committed: # (use "git reset HEAD ..." 
to unstage)
#
#	renamed:    newfile1 -> tester/newfile1
#	renamed:    newfile2 -> tester/newfile2
#	renamed:    newfile3 -> tester/newfile3
#	new file:   tester/test.sh
#
john@satsuki:~/coderepo$ git commit -a -m 'Moved testing suite'
[master f08ac57] Moved testing suite
 4 files changed, 9 insertions(+), 0 deletions(-)
 rename newfile1 => tester/newfile1 (100%)
 rename newfile2 => tester/newfile2 (100%)
 rename newfile3 => tester/newfile3 (100%)
 create mode 100755 tester/test.sh
john@satsuki:~/coderepo$
\end{code}
We have reverted back to our \textbf{master} branch and in doing so have regained \texttt{newfile1}. After that, we deleted our rewritten branch and moved \texttt{test.sh} along with all of the \texttt{newfile}s into a new folder called \texttt{tester}.
\section{Day 4 - ``Let's make a library''}
\subsection{Splitting the atom}
Sometimes, after a project has been running for a while, certain components actually grow rather useful. When this happens, people often want to move such a component outside of the original project and maintain it as a separate library. Of course the easiest way to do this is to just copy and paste the files out of the main project and into a subdirectory. In doing this, however, we would lose or disconnect all of the development history of that subproject up to this point. \index{filtering!sub-directory}Using \texttt{git filter-branch} we can actually pull out a folder and retain all of its history. The methodology behind this is that we rewrite the history to a new branch, but we only pull across changes to a particular folder and we store those in the root of the branch. Let us see how this works with a quick example. Remember we created the \texttt{tester} folder? We are going to make a few commits to the files in this folder to give it some history.
\begin{code} john@satsuki:~/coderepo$ echo "More development work" >> tester/newfile1 john@satsuki:~/coderepo$git commit -a -m 'Work on tester nf1' [master 1a4956b] Work on tester nf1 1 files changed, 1 insertions(+), 0 deletions(-) john@satsuki:~/coderepo$ echo "More dev work" >> tester/newfile2 john@satsuki:~/coderepo$git commit -a -m 'Work on tester nf2' [master 7156104] Work on tester nf2 1 files changed, 1 insertions(+), 0 deletions(-) john@satsuki:~/coderepo$ echo "Even more dev work" >> tester/newfile3 john@satsuki:~/coderepo$git commit -a -m 'Work on tester nf3' [master 1433223] Work on tester nf3 1 files changed, 1 insertions(+), 0 deletions(-) john@satsuki:~/coderepo$ \end{code} Now we are going to split that off into a separate branch which we will then clone into a new Git repository. After we have copied the history of the \texttt{tester} folder to a new branch, see if you can run through in your head, the steps we would need to take to pull this branch into a new repository. \begin{code} john@satsuki:~/coderepo$git checkout -b tester_split Switched to a new branch 'tester_split' john@satsuki:~/coderepo$ git filter-branch --subdirectory-filter tester Rewrite 1433223d9c8a8abc35410d12cf78128c318b6e42 (4/4) Ref 'refs/heads/tester_split' was rewritten john@satsuki:~/coderepo$git branch develop master * tester_split wonderful zaney john@satsuki:~/coderepo$ ls newfile1 newfile2 newfile3 test.sh john@satsuki:~/coderepo$git checkout master Switched to branch 'master' john@satsuki:~/coderepo$ ls another_file cont_dev tester john@satsuki:~/coderepo$\end{code} So now the directory has been split away from the original source code into a new branch. Have a think about what steps you would take to bring this into an entirely new repository. \begin{callout}{Note}{More backups} \index{filtering!backup}Git likes to make things easy for you. 
You may not have noticed it before, but when using the \texttt{git filter-branch} tool to rewrite a branch, Git keeps a backup of the value of HEAD before you started rewriting your branch. This backup is kept in \texttt{refs/original/refs/heads/}. This file will contain a commit ID which we can use to revert our branch back to its original state, if the filter goes horribly wrong.
\end{callout}
\begin{trenches}
``So John, I managed to split the Atom library out into a new branch like you said, but I have no idea how to pull this into a new repo.'' Jack was finally feeling like he had gotten to grips with Git, but his latest task had left him feeling a little dejected. He idly stabbed at his leg with a pen whilst waiting for John to finish his tapping away.
John lifted his hands from the keyboard and turned his chair. ``You really can't think of a way to copy what we have in one repo into another?''
Suddenly it was like a light bulb had exploded with light inside Jack's skull. ``CLONES!'' he shouted.
\end{trenches}
We actually have at least four methods we can use to do this.
\begin{enumerate}
\item Copy the data from one repo to another with a simple copy and paste
\item Clone our repository, delete all of the branches other than \textbf{tester\_split} and then rename it to \textbf{master}
\item Initialise a new repository, set up a remote to the original and then fetch our \textbf{tester\_split} branch
\item Create a bundle of the \textbf{tester\_split} branch and then clone from the bundle into a new repository
\end{enumerate}
The first of these will leave us with no history of development at all, so let us ignore it, as it is not what we require. The second of these is trivial and should require no explanation at all. We simply clone and then, using the usual tools, delete all unnecessary branches. However this method does have its disadvantages, namely the fact that when we clone the repository, we take every single object from the source repository into the new one.
Whilst this is generally not a problem, it would mean that we would have to run some fairly aggressive garbage collection to remove all of these unwanted objects. This would happen naturally over time as the objects aged and were no longer referenced, but it would result in a repository that was initially much larger than it needed to be. The other two methods deserve a little more consideration as they both perform much better in this respect. You should be familiar enough with previous material to be able to perform the third method right now. However, using the fetch command as we have done before would again pull in many more objects than we require. As such we are going to apply a subtle twist to this command in the following output.
\begin{code}
john@satsuki:~/coderepo$ cd ../
john@satsuki:~$ mkdir subrepo
john@satsuki:~$ cd subrepo/
john@satsuki:~/subrepo$ git init
Initialized empty Git repository in /home/john/subrepo/.git/
john@satsuki:~/subrepo$ git remote add source /home/john/coderepo
john@satsuki:~/subrepo$ git fetch source +tester_split:master
fatal: Refusing to fetch into current branch refs/heads/master of non-bare repository
fatal: The remote end hung up unexpectedly
john@satsuki:~/subrepo$
\end{code}
\index{branching!fetch single branch}\index{fetching!single branch}What we have asked Git to do is to pull only the branch \textbf{tester\_split} from the remote we called \textbf{source} and place it into \textbf{master} locally. Think of the refspec \texttt{+tester\_split:master} as \texttt{+<source branch>:<destination branch>} and all will make sense. As you can see Git is not too happy about our intentions here as it does not like overwriting the \textbf{master} branch of a non-bare repository. That is OK, we have another way around this.
\begin{code}
john@satsuki:~/subrepo$ git fetch source +tester_split:tmp
remote: Counting objects: 15, done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 15 (delta 3), reused 0 (delta 0)
Unpacking objects: 100% (15/15), done.
From /home/john/coderepo
 * [new branch]      tester_split -> tmp
john@satsuki:~/subrepo$ git branch -m tmp master
john@satsuki:~/subrepo$
\end{code}
So we have deceived Git a little here, but I think we can live with ourselves. By first pulling the branch into a \textbf{tmp} branch, we were then allowed to rename it as \textbf{master}. Notice the number of objects required for this branch: \texttt{15}. If you remember, when we cloned our repository a few \emph{weeks} ago, this value was a lot higher. It was the subtle \texttt{+tester\_split:tmp} refspec which prevented us from pulling every last object from the source repository into our new slim \emph{sub}-repository.
\begin{code}
john@satsuki:~/subrepo$ ls
john@satsuki:~/subrepo$ git checkout master
Already on 'master'
john@satsuki:~/subrepo$ ls
newfile1  newfile2  newfile3  test.sh
john@satsuki:~/subrepo$
\end{code}
Notice that there are no files in the repository until we have checked out. This is because all the fetch did was to \emph{fetch} the objects and place them in the repository object directory. It did not place anything in the working directory. If you remember, this is the same behaviour we saw with fetching before. So now we have a complete copy of the \texttt{tester} component of our repository from the source into a new repository. If we do a \texttt{git log}, we can see the history of the development.
\begin{code}
john@satsuki:~/subrepo$ git log --format=oneline
590e0eb79bc5ba0bc09f611392e643f676b00a04 Work on tester nf3
785b86d877d2a5c0679d98181a23d06ed2ba7652 Work on tester nf2
1ff89f787438f081a0d74de2d26eb2d831c9c738 Work on tester nf1
a5a0d9762dd4b50d8f3228e37b315f6056d5a034 Moved testing suite
john@satsuki:~/subrepo$
\end{code}
Unfortunately, since some of our development work on these files happened outside of this directory, that history was lost when splitting; this is something to keep in mind should you ever perform this kind of operation.
\subsection{Little bundles of joy}
Git has so many ways to do things.
\index{bundling}This is in part what makes it a little daunting for those just starting but after you have gained a little experience, you begin to understand just what is happening in the background. When this realisation hits, you are able to almost immediately think of at least two different ways of performing the same thing. There have been numerous examples throughout the book, where there have been multiple ways to complete the same task. Here we are going to look at just one more way that we can create a new repo from our \textbf{tester\_split} branch. The tool we are going to introduce here is \indexgit{bundle}. \index{bundling!creating}\index{bundling!cloning from}The \texttt{bundle} utility allows us to export a set of revisions and archive them to a file. This file then becomes a resource that can be updated and pulled or fetched from. This is especially useful if you have no physical connection between two computers and wish to sync some of the data from one to the other. Let us take a quick look at how we could use the bundle tool in this case. \begin{code} john@satsuki:~/coderepo$git bundle create ../tester.bundle tester_split Counting objects: 15, done. Compressing objects: 100% (14/14), done. Writing objects: 100% (15/15), 1.50 KiB, done. Total 15 (delta 3), reused 0 (delta 0) john@satsuki:~/coderepo$ cd .. john@satsuki:~$git clone tester.bundle subrepo-b Cloning into subrepo-b... warning: remote HEAD refers to nonexistent ref, unable to checkout. john@satsuki:~$ \end{code} The syntax is fairly simple. The word \texttt{create} is used to tell Git to create a new bundle. After this we specify a filename and then the tip of the branch that we want to archive. However, as can be seen above, there is a problem. When we created the bundle, the branch which was checked out at the time was \textbf{master}. The objects we pulled from the source repository and placed in the bundle were all from the \textbf{tester\_split} branch. 
As such the HEAD of the working tree at the time of the bundle creation pointed to an object in the \textbf{master} branch. Obviously this object does not exist in our bundle and so Git complains. If we had checked out \textbf{tester\_split} before creating the bundle, there would have been no complaints. So all we have to do is to remap the HEAD of \textbf{master} to that of the HEAD of \textbf{tester\_split}. As you can see below, it seems as if there are no branches at all and when we try to check out \textbf{master} it does not exist. What actually happened is that the objects were cloned into the repository, but as the object that the source HEAD pointed to was unavailable, no branch was created. With a little \texttt{git reset} trickery, we can create our \textbf{master} branch in our new repository.
\begin{code}
john@satsuki:~$ cd subrepo-b/
john@satsuki:~/subrepo-b$ git branch
john@satsuki:~/subrepo-b$ git checkout master
error: pathspec 'master' did not match any file(s) known to git.
john@satsuki:~/subrepo-b$ git reset --hard origin/tester_split
HEAD is now at 590e0eb Work on tester nf3
john@satsuki:~/subrepo-b$ git checkout master
Already on 'master'
john@satsuki:~/subrepo-b$ ls
newfile1  newfile2  newfile3  test.sh
john@satsuki:~/subrepo-b$
\end{code}
Now we have our repository complete as before and we have successfully remapped the \textbf{master} branch so that it points to \textbf{origin/tester\_split}.
\begin{trenches}
Martha and John were sitting together in the office. The rest of the team had left hours ago and it was getting really late. Martha broke the silence, ``So we've pulled the Atom library out,'' she giggled before continuing, ``but how the heck do we put it back in again?''
``I'm really not sure,'' said John, taking another swig of coffee before placing the mug back down on the desk. On the side was written the word GIT in large marker pen, a gift from Klaus.
Martha sighed. ``It's getting pretty late John.
I think I'm gonna head out.''
``Yeh, I know what you mean,'' started John, ``I think I'll get going too. Thanks for the help Martha.''
``Anytime John.''
\end{trenches}
\section{Day 5 - ``Shhh....we're in a library''}
\subsection{Nuclear fusion}
OK, so we are not quite at the stage of nuclear physics, but it would be nice to know how to bring our library back into our repository. Git offers a tool called \indexgit{submodule}. This tool allows you to link a remote repository's branch and store it under a subdirectory of the project. It does have some nuances which must be learnt, but can be very useful. Let us add our testing suite from the \texttt{subrepo} repository into the directory called \texttt{tester} in our main \texttt{coderepo} repository. First we must remove our \texttt{tester} directory.
\begin{code}
john@satsuki:~/coderepo$ git checkout master
Already on 'master'
john@satsuki:~/coderepo$ git rm tester/*
rm 'tester/newfile1'
rm 'tester/newfile2'
rm 'tester/newfile3'
rm 'tester/test.sh'
john@satsuki:~/coderepo$ git commit -a -m 'Removed tester - will be replaced by submodule'
[master 5698499] Removed tester - will be replaced by submodule
 4 files changed, 0 insertions(+), 20 deletions(-)
 delete mode 100644 tester/newfile1
 delete mode 100644 tester/newfile2
 delete mode 100644 tester/newfile3
 delete mode 100755 tester/test.sh
john@satsuki:~/coderepo$
\end{code}
We need to define what a submodule actually is. Submodules are tricky to understand and often people use them once and conclude that they are more trouble than they are worth. However, if you take some time to understand what a submodule really is, then they can be very useful to you. A submodule is the inclusion of a repository branch at a specific commit. It is not intended to track the development of the upstream library or module (see the callout box for an explanation of \emph{upstream}).
\begin{callout}{Terminology}{Upstream} \emph{Upstream} refers to the source of a project which may have one or more derivatives which are also distributed. Take the package that was used to build this book for example, \LaTeX. \LaTeX is distributed by the people who developed it as open source software, but it is also included with a number of Linux distributions. The location of the software created by the \LaTeX developers is referred to as the \emph{upstream} project. The projects which include it within their own are what is referred to as \emph{downstream}. Think of it like a river which flows from the source further \emph{upstream}. \end{callout} As we will see, though it can be a little longwinded to actually change the version of the code that the submodule refers to, it actually makes a lot of sense to handle them in this way. If the code in the submodule is being included in your repository, you do not want to run the risk of a change upstream resulting in a broken build for your project. This is why submodules always refer to a single commit. Let us go ahead, create a submodule and then discuss the steps we have taken. \begin{code} john@satsuki:~/coderepo$ git submodule add /home/john/subrepo tester Cloning into tester... done. john@satsuki:~/coderepo$git status # On branch master # Changes to be committed: # (use "git reset HEAD ..." to unstage) # # new file: .gitmodules # new file: tester # john@satsuki:~/coderepo$ git commit -a -m 'Added submodule (subrepo)' [master 2aadc11] Added submodule (subrepo) 2 files changed, 4 insertions(+), 0 deletions(-) create mode 100644 .gitmodules create mode 160000 tester john@satsuki:~/coderepo/tester$\end{code} As you can see we had to perform a number of steps before we obtained the source for the \textbf{subrepo} library in our \texttt{tester} directory. We had to begin by using \indexgit{submodule} to add the upstream repository. 
The upstream repository is really just like any remote repository we have been using, but we will use the terminology \emph{upstream} to make a distinction. The command \texttt{git submodule add /home/john/subrepo tester} creates a special file in the root of our project called \texttt{.gitmodules}, plus it clones the upstream repository into the folder we specified, in this case \texttt{tester}. Notice that when we ran \texttt{git status}, we saw two new entries, one for \texttt{.gitmodules} and one for \texttt{tester}. Next we have to commit those entries using the standard \texttt{git commit} command. When we do, we see that there is a special mode (\texttt{160000}) in front of \texttt{tester} which tells Git to treat this directory as a submodule. Though the submodule has now been added, it has not yet been initialised. To do this, we run our next set of steps.
\begin{code}
john@satsuki:~/coderepo$ git submodule init
Submodule 'tester' (/home/john/subrepo) registered for path 'tester'
john@satsuki:~/coderepo$ git submodule update
john@satsuki:~/coderepo$
\end{code}
Now our submodule has been added and initialised. The update command is used to ensure that the directory \texttt{tester} contains the version of the submodule that we committed earlier.
\begin{code}
john@satsuki:~/coderepo$ cd tester/
john@satsuki:~/coderepo/tester$ ls
newfile1  newfile2  newfile3  test.sh
john@satsuki:~/coderepo/tester$ git log --format=oneline
590e0eb79bc5ba0bc09f611392e643f676b00a04 Work on tester nf3
785b86d877d2a5c0679d98181a23d06ed2ba7652 Work on tester nf2
1ff89f787438f081a0d74de2d26eb2d831c9c738 Work on tester nf1
a5a0d9762dd4b50d8f3228e37b315f6056d5a034 Moved testing suite
john@satsuki:~/coderepo$
\end{code}
Looking in the directory we can see two things. The first is that the files present in the \textbf{subrepo} upstream project have now been added. The second may appear a little surprising to begin with.
The \texttt{git log} command actually shows a log for the upstream project, not for the local root project stored in \texttt{coderepo}. In all honesty, the submodule repository is actually just a clone of the upstream project, with a few subtle differences. The information about which upstream url to use for the project can be found in the \texttt{.gitmodules} which we committed earlier. Below is an example of what the file looks like in our current repository. \begin{code} john@satsuki:~/coderepo$cat .gitmodules [submodule "tester"] path = tester url = /home/john/subrepo john@satsuki:~/coderepo$ \end{code} \subsection{Changes down the river} So what happens when we want to pull in changes from the upstream project? Well, you can make your submodule point to whatever commit you like and stay there. As long as you commit your changes in the super project, Git will always allow you to return to that point using the \texttt{git submodule update} command. Let us take a look at how we could pull in some changes into our \texttt{tester} submodule. First, we are going to make a change to our upstream project. \begin{code} john@satsuki:~/coderepo$cd .. john@satsuki:~$ cd subrepo john@satsuki:~/subrepo$ls newfile1 newfile2 newfile3 test.sh john@satsuki:~/subrepo$ echo "Added a new function" > newfile4 john@satsuki:~/subrepo$git add newfile4 john@satsuki:~/subrepo$ git commit -a -m 'Added a new library file' [master 94ad27e] Added a new library file 1 files changed, 1 insertions(+), 0 deletions(-) create mode 100644 newfile4 john@satsuki:~/subrepo$cd .. john@satsuki:~/subrepo$ \end{code} Now that we have a new version of the project, let us try to pull those changes into our superproject. \begin{code} john@satsuki:~$cd coderepo john@satsuki:~/coderepo$ cd tester john@satsuki:~/coderepo/tester$git status # On branch master nothing to commit (working directory clean) john@satsuki:~/coderepo/tester$ git fetch origin remote: Counting objects: 4, done. 
remote: Compressing objects: 100% (2/2), done. remote: Total 3 (delta 1), reused 0 (delta 0) Unpacking objects: 100% (3/3), done. From /home/john/subrepo 590e0eb..94ad27e master -> origin/master john@satsuki:~/coderepo/tester$git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. john@satsuki:~/coderepo/tester$ \end{code} As you can see, we are told that our branch is currently one commit behind that of \textbf{origin/master}. If we want to update our \textbf{master} branch in the submodule, we need to \emph{pull} our changes in, just like a \textbf{real} Git repository. \begin{code} john@satsuki:~/coderepo/tester$git pull Updating 590e0eb..94ad27e Fast-forward newfile4 | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) create mode 100644 newfile4 john@satsuki:~/coderepo/tester$ ls newfile1 newfile2 newfile3 newfile4 test.sh john@satsuki:~/coderepo/tester$cd .. \end{code} Now let us see what happens if we try to update the module. \begin{code} john@satsuki:~/coderepo$ git submodule update Submodule path 'tester': checked out '590e0eb79bc5ba0bc09f611392e643f676b00a04' john@satsuki:~/coderepo$cd tester john@satsuki:~/coderepo/tester$ ls newfile1 newfile2 newfile3 test.sh john@satsuki:~/coderepo/tester$\end{code} Our new changes have disappeared. How odd! Well actually not really. As we stated earlier, when we committed our \texttt{.gitmodules} file along with the \texttt{tester} directory, we not only committed the fact that we required a submodule, we also committed the exact point we wanted that submodule to point to. If we want to change this, then we must commit that as a change. It may seem a little odd that we have to jump through these hoops to get an update to an upstream project, but if you think about it, it actually makes a lot of sense. It means that anyone cloning our repository is sure to get a version of the submodule that we have decided is right for the project. 
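One quick way to see exactly which commit the superproject has pinned is to inspect the tree entry for the submodule directly. This is a hedged aside, not part of the chapter's session; the \texttt{160000} mode is what marks the entry as a \emph{gitlink}, a pinned commit rather than a regular file or directory:

```shell
# Run from the superproject root; 'tester' is the submodule path.
# The 160000 mode identifies a gitlink entry (a pinned commit),
# as opposed to a regular file (100644) or a tree (040000).
git ls-tree HEAD tester
# e.g.: 160000 commit <submodule commit id>	tester
```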
So keeping this in mind, let us walk through a quick example of how we would finish the job and commit a new version of the submodule.
\begin{code}
john@satsuki:~/coderepo$ cd tester/
john@satsuki:~/coderepo/tester$ git pull
You are not currently on a branch, so I cannot use any
'branch.<branchname>.merge' in your configuration file.
Please specify which remote branch you want to use on the command
line and try again (e.g. 'git pull <repository> <refspec>').
See git-pull(1) for details.
john@satsuki:~/coderepo/tester$
\end{code}
Interesting! What has happened here is that by performing the \texttt{git submodule update} command, we effectively asked Git to check out a commit. Remember in the past we talked about detached HEAD? This is exactly what Git has done. A submodule spends most of its life in a detached HEAD state. As we tell Git that we must have the submodule at a specific commit, it means that Git checks out a commit, rather than a branch. If you think about it, this makes sense; we do not want the contents of the module \emph{changing}. So to bring our module up to date, we need to first check out \textbf{master}. Then we can issue our \texttt{git pull}.
\begin{code}
john@satsuki:~/coderepo/tester$ git checkout master
Previous HEAD position was 590e0eb... Work on tester nf3
Switched to branch 'master'
john@satsuki:~/coderepo/tester$ git pull
Already up-to-date.
\end{code}
Oh? Should we not have seen some commits pulled in here? Actually, no. We pulled the changes into master earlier, when we ran the \texttt{git pull}. When the module reverted to the earlier commit, \textbf{590e0eb}, it did not affect the master branch at all, as we simply checked out a single commit. So by switching to \textbf{master}, we have already altered the contents of the submodule directory, as can be seen below.
\begin{code}
john@satsuki:~/coderepo/tester$ ls
newfile1  newfile2  newfile3  newfile4  test.sh
john@satsuki:~/coderepo/tester$ cd ..
john@satsuki:~/coderepo$ git status
# On branch master
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#	modified:   tester (new commits)
#
no changes added to commit (use "git add" and/or "git commit -a")
john@satsuki:~/coderepo$
\end{code}
All we need to do now is to commit the submodule changes into the repository and check that the update yields the new file.
\begin{code}
john@satsuki:~/coderepo$ git commit -a -m 'Up revd upstream module'
[master 022a163] Up revd upstream module
 1 files changed, 1 insertions(+), 1 deletions(-)
john@satsuki:~/coderepo$ git submodule update
john@satsuki:~/coderepo$ cd tester/
john@satsuki:~/coderepo/tester$ ls
newfile1  newfile2  newfile3  newfile4  test.sh
john@satsuki:~/coderepo/tester$ cd ..
john@satsuki:~/coderepo$
\end{code}
As you can see, submodules can be rather useful. You can even make changes to the repository in the submodule and commit them locally. As this is a Git repository in its own right, you can merge \emph{upstream} changes in too! Remember though that if you made changes and committed them to the submodule, and then issued a \texttt{git submodule update} without first committing your changes in the superproject, your commit would be lost. Of course nothing in Git is ever really lost, but it would be prudent of you to always keep changes you make to submodules in a branch; that way they are easy to bring back if you make a mistake like the one described. With that all said and done, we have finished our tour of the major portions of Git. What follows in the next chapter are some other points, added more for information on what \textbf{can} be done with Git.
\clearpage
\section{Summary - John's Notes}
\subsection{Commands}
\begin{itemize}
\item\texttt{git apply <filename>} - Applies a patch to the working tree
\item\texttt{git reflog show <branch>} - Show the reflog only for the specified branch
\item\texttt{git format-patch <start>..<end>} - Create a set of patches of each commit between two points
\item\texttt{git am <filename>} - Apply a specific patch containing a \emph{format-patch} file
\item\texttt{git bisect start} - Begin a bisect session
\item\texttt{git bisect good <ref>} - Mark a reference as good, during a \texttt{git bisect}
\item\texttt{git bisect bad <ref>} - Mark a reference as bad, during a \texttt{git bisect}
\item\texttt{git bisect start <bad> <good>} - Start a bisect session between two known points
\item\texttt{git bisect run <command>} - Start an automated run of the bisect tool
\item\texttt{git filter-branch --index-filter 'git rm --cached \newline --ignore-unmatch <filename>' HEAD} - Rewrites the current branch to remove the given file
\item\texttt{git filter-branch --subdirectory-filter <directory>} - Rewrites the current branch to make the given subdirectory the root of the branch
\item\texttt{git fetch <remote> +<remote branch>:<local branch>} - Creates a local branch from the remote branch existing in a remote repository
\item\texttt{git branch -m <oldname> <newname>} - Move or rename a branch from the old name to the new
\item\texttt{git bundle create <filename> <branch>} - Create a bundle file in \texttt{<filename>}, containing all the objects and references from \texttt{<branch>}.
\item\texttt{git submodule add <repository> <path>} - Add a submodule at the directory specified by \texttt{<path>}
\item\texttt{git submodule init} - Initialise any submodules in the super project
\item\texttt{git submodule update} - Pull all submodules back to the points that have previously been committed to
\end{itemize}
\subsection{Terminology}
\begin{itemize}
\index{Terminology!Patching}\item\textbf{Patching} - A method of distributing changes from someone else's repository without having a line of communication between the two, or without a user having access to commit into the destination repository
\index{Terminology!Bundle}\item\textbf{Bundle} - A type of archive file that holds objects and commits and can be pulled from
\index{Terminology!Bisect}\item\textbf{Bisect} - A way of progressively searching through a repository to find where bugs were introduced
\index{Terminology!Filtering}\item\textbf{Filtering} - Takes a branch and rewrites it according to a set of rules
\index{Terminology!Submodule}\item\textbf{Submodule} - Incorporating a remotely reachable project as a subdirectory of a superproject
\index{Terminology!Superproject}\item\textbf{Superproject} - A Git repository containing one or more submodules
\end{itemize}
http://www.cs.utexas.edu/~pingali/CS377P/2017sp/assignments/hw3/hw3.html

# CS 377P: Programming for Performance
## Assignment 3: Operator formulation of algorithms
### Due date: March 7th, 2017
Late submission policy: Submissions can be at most 2 days late. There will be a 10% penalty for each day after the due date (cumulative).
Clarifications to the assignment are posted at the bottom of the page.
## Description
This assignment introduces you to the operator formulation of algorithms. The motto introduced in class is Algorithm = Operator + Schedule, and in this assignment, you will implement sequential algorithms for the single-source shortest-path (sssp) problem to understand this motto. Read the entire assignment before starting your coding, and get started early: this assignment requires more programming than previous assignments.
Key concepts
Recall that we classify algorithms into topology-driven and data-driven algorithms.
Topology-driven algorithms make a number of sweeps over the graph. At the start of the algorithm, node labels are initialized as needed by the algorithm (for example, for sssp, the label of the source node is initialized to zero and the labels of all other nodes are initialized to $\infty$). In each sweep, the operator is applied to all nodes. The algorithm terminates when a sweep does not modify the label of any node. In some problems, particularly those in which labels are floating point numbers, we may never get to exact convergence so we terminate the algorithm when node updates are below some threshold or when some upper bound on the number of iterations is reached.
Data-driven algorithms maintain a work-list of active nodes. The work-list can be considered to be an abstract data type (class) that supports two methods: put and get. Active nodes are added to the work-list by invoking the put method with the set of active nodes. The work-list can be maintained either as a set (so no duplicates are allowed) or as a multi-set (duplicates are allowed). In this assignment, work-lists can be implemented as multi-sets so you do not need to check for duplicates. The get method returns an active node from the work-list if it is not empty, and removes it from the work-list. If there are multiple active nodes in the work-list, the schedule determines which one is returned. Applying the operator to an active node may change the labels of other nodes in the graph; if so, these nodes become active and are added to the work-list. For problems in which labels are floating-point numbers, we may choose not to activate a node if the change to its label is below some threshold. Data-driven algorithms terminate when the work-list is empty and all active nodes have been processed.
Graph formats
Input graphs will be given to you in DIMACS format, which is described at the end of this assignment. The output for each algorithm should be produced as a text file containing one line for each node, specifying the number of the node and the label of that node.
• You can find all graphs for this assignment on Stampede here: /work/01131/rashid/class-inputs .
• We have provided the following graphs for sssp: power-law graphs rmat15, rmat20, rmat22, and rmat23, and road networks road-FLA (Florida road network) and road-NY (New York road network). Graphs like rmat22 and rmat23 are quite big so do not do any runs with them until your code has been debugged on some small graphs that you have constructed.
Coding
1. I/O routines for graphs: These routines will be important for debugging your programs so make sure they are working before starting the rest of the assignment.
• Write a C++ routine that reads a graph in DIMACS format from a file, and constructs a Compressed-Sparse-Row (CSR) representation of that graph in memory. Node and edge labels can be ints for the graphs we are dealing with.
• Write a C++ routine that takes a graph in CSR representation in memory, and prints it out to a file in DIMACS format.
• Write a C++ routine that takes a graph in CSR representation in memory, and prints node numbers and node labels, one per line.
2. Data-driven algorithms: Implement a routine that takes a graph G and a work-list w of active nodes as input, and performs a data-driven sssp computation on graph G. By passing different work-lists to this routine as described below, you can implement different data-driven algorithms for sssp without changing the code in your routine. Instrument your code to count the number of node and edge relaxations.
• Graph initialization: read in the graph from the file, create the graph in CSR format in memory, and initialize node labels so that the source node has label 0 and all other nodes are initialized to a large positive number (you can use INT_MAX).
• Chaotic relaxation sssp algorithm:
• Implement the work-list as a bag. The get method for this work-list should select a random active node from the nodes in the work-list.
• You can use the rand function in C++ to generate random numbers; this webpage shows you how to generate random numbers within a particular range: http://www.cplusplus.com/reference/cstdlib/rand/ . By using different seeds, you can generate different sequences of random numbers.
• Chaotic relaxation can take a very long time even for small graphs for some schedules of node relaxations. Your code should terminate the computation if the number of relaxations exceeds some bound that depends on the size of the graph.
• Delta-stepping sssp algorithm:
• Implement the work-list as a sequence of bags in which the first bag contains nodes with labels in the interval [0, Δ), the second bag contains nodes with labels in [Δ, 2Δ), and so on. The get method should return a random node from the first non-empty bag. The value of Δ should be a parameter to the constructor for your work-list. For efficiency, your work-list can keep track of the first non-empty bag instead of searching the bags one at a time to find the first non-empty bag.
• Dijkstra's algorithm:
• Setting Δ to one in the delta-stepping algorithm gives you Dijkstra's algorithm. You may get better performance by using a heap to implement the work-list but you do not need to implement this.
Experiments
Data-driven sssp algorithms
• [updated] Source node for the sssp computation: node 1 for all rmat graphs, node 140961 for road-NY, node 316607 for road-FLA. These are the nodes with the highest degree.
• Draw two small graphs with roughly 5 nodes and 20 edges, and generate files for them in DIMACS format. You should use these graphs to debug your code before using the bigger graphs we have provided to you.
• Submit these two graphs with your report.
• Write a routine that traverses a graph in CSR format and determines the number of the node with the largest out-degree. This is an exercise to check that you understand the CSR format and know how to use it for graph algorithms.
• Report this node number for each of the graphs given to you (you should check that this is the same as the source node for sssp described above).
• Chaotic relaxation:
• Experiment with three different seeds for the random number generator.
• Report the running times, the number of node relaxations, and the number of edge relaxations for rmat15. If your code timed out, put some symbol like "*" in the table for that experiment.
• Dijkstra's algorithm:
• Run Dijkstra's algorithm on rmat15 and road-NY.
• Report the number of node relaxations.
• Compute analytically what this number should be, and compare it with the number from your experiment.
• Output the final node labels for both graphs in the format specified in the Graph Formats section of this assignment.
• Delta-stepping:
• Determine experimentally the optimal values of Δ for rmat15 and for road-NY, and report these in your submission.
• Output the final node labels for both graphs.
• Use the Δ value you found for rmat15 to perform sssp for all the rmat graphs. Plot a graph in which the x-axis is the number of nodes in the rmat graph and the y-axis is the running time.
• Plot a similar graph for the number of node relaxations.
## Submission
Submit (in canvas) your code and all the items listed in the experiments above.
• Code: 50 points
• Experiments: 50 points
• DIMACS format for graphs
One popular format for representing directed graphs as text files is the DIMACS format (undirected graphs are represented as a directed graph by representing each undirected edge as two directed edges). Files are assumed to be well-formed and internally consistent so it is not necessary to do any error checking. A line in a file must be one of the following.
• Comments. Comment lines give human-readable information about the file and are ignored by programs. Comment lines can appear anywhere in the file. Each comment line begins with a lower-case character c.
c This is an example of a comment line.
• Problem line. There is one problem line per input file. The problem line must appear before any node or edge descriptor lines. The problem line has the following format.
p FORMAT NODES EDGES
The lower-case character p signifies that this is the problem line. The FORMAT field should contain a mnemonic for the problem such as sssp. The NODES field contains an integer value specifying n, the number of nodes in the graph. The EDGES field contains an integer value specifying m, the number of edges in the graph. These two fields tell you how much storage to allocate for the CSR representation of the graph.
• Edge Descriptors. There is one edge descriptor line for each edge in the graph, each with the following format. Each edge (s,d,w) from node s to node d with weight w appears exactly once in the input file.
a s d w
The lower-case character "a" signifies that this is an edge descriptor line. The "a" stands for arc, in case you are wondering.
Notes added after assignment was posted:
• (2/21, 2:13 PM): You may use classes from the C++ STL and boost libraries if you wish.
• (2/22, 5:36 PM): I changed the definition of edges in the DIMACS format. Edges in the file start with "a" (for arc).
• (2/25: 12:09PM): Because of the generator used for rmat graphs, the files for some of the graphs may have multiple edges between the same pair of nodes. When building the CSR representation in memory, keep only the edge with the largest weight. For example, if you find edges (s d 1) and (s d 4), keep only the edge with weight 4. In principle, you could keep the smallest-weight edge or follow some other rule, but I want everyone to follow the same rule to make grading easier. This has been discussed twice in piazza as well but feel free to post there if this is not clear.
• (3/3: 6:04PM): Source nodes for SSSP computations have been updated above and in Piazza.
• (3/4: 2:00PM/8:10PM): Here is the solution to the rmat15 sssp problem.