url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://vhdlweb.com/problem/test_adder
|
Write a testbench to check that the module adder actually performs correctly as a 4-bit adder.
You probably want to use some kind of test generation, rather than writing out all the cases. A VHDL for loop might be useful:
for INDEXVAR in MIN to MAX loop
-- loop body, which can use INDEXVAR
end loop;
Note that INDEXVAR is an integer (not an unsigned or std_logic_vector), so you'll need to convert it to the appropriate type if you hope to do any comparisons. Take a look at the reference sheet for how to do conversions.
-- Testbench for 4-bit adder library IEEE; use IEEE.std_logic_1164.all; use IEEE.numeric_std.all; use std.textio.all; entity adder_test is -- No ports, since this is a testbench end adder_test; architecture test of adder_test is component adder is port( a : in unsigned(3 downto 0); b : in unsigned(3 downto 0); sum : out unsigned(3 downto 0) ); end component; begin end test;
|
2021-06-16 14:18:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31728923320770264, "perplexity": 5562.363028828862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00067.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0B8L
|
Lemma 17.24.3. Let $f : (X, \mathcal{O}_ X) \to (Y, \mathcal{O}_ Y)$ be a morphism of ringed spaces. The pullback $f^*\mathcal{L}$ of an invertible $\mathcal{O}_ Y$-module is invertible.
Proof. By Lemma 17.24.2 there exists an $\mathcal{O}_ Y$-module $\mathcal{N}$ such that $\mathcal{L} \otimes _{\mathcal{O}_ Y} \mathcal{N} \cong \mathcal{O}_ Y$. Pulling back we get $f^*\mathcal{L} \otimes _{\mathcal{O}_ X} f^*\mathcal{N} \cong \mathcal{O}_ X$ by Lemma 17.16.4. Thus $f^*\mathcal{L}$ is invertible by Lemma 17.24.2. $\square$
## Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
Unfortunately JavaScript is disabled in your browser, so the comment preview function will not work.
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0B8L. Beware of the difference between the letter 'O' and the digit '0'.
|
2021-12-03 15:40:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8243464827537537, "perplexity": 755.227268337798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00621.warc.gz"}
|
https://edutray.com/quiz/julie-and-vee/
|
• You can revisit questions and change your answers at any time during the exam.
• The only permitted characters for numerical answers are:
• Numbers
• One full stop as a decimal point if required
• One minus symbol at the front of the figure if the answer is negative.
For example: -10234.35
No other characters, including commas, are accepted.
## Navigating between questions
• Click Next Button to move to the next question.
• Click Pervious Button to move back to the previous question.
• Click on a question number from the Exam Progress Details panel (see next page) to move directly to that question.
• A message will be displayed when you click to move away from a question which has been partially attempted. You can choose to stay on the question and review your answer(s) or continue.
• When reviewing your answer(s) for partially attempted questions ensure you read any message displayed in red text below the question in Section A or below the question
part(s) in Section B
.
JULIE AND VEEThe summarized statement of financial position for Julie Co and Vee Co at 30 November 2009 were as below;
Julie Vee Non current asset PPE 138,000 115,000 Investment 98,000 Current Asset Inventory 15,000 17,000 Receivables 19,000 20,000 Cash 2,000 Total Assets 272,000 152,000 Share Capital of $1 each 50,000 40,000 Retained Earnings 189,000 69,000 Current Liability 33,000 43,000 Total Equity and Liability 272,000 152,000 On 1st May November 2009 Julie Co bought 60% of Vee Co paying$76,000 cash
The following information is relevant
1. At 30th November 2009 the inventory of Vee Co includes goods purchased at a cost of $800,000 from Julie Co at cost plus 25%. None of the goods have been sold on by the reporting date. 2. Julie Co values the non controlling interest using the fair value method. At the date of acquisition the fair value of the 40% non controlling interest was$50,000.
3. Vee Co earned a profit after tax for the year of \$900,000 in the year ended 30th November 2009.
Required,
Consolidated financial position of Julie Group as at 30th November 2009.
(Provide the final amount)
|
2021-06-17 23:23:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19365738332271576, "perplexity": 4941.4235703587265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634576.73/warc/CC-MAIN-20210617222646-20210618012646-00153.warc.gz"}
|
https://blender.stackexchange.com/questions/72809/auto-execution-cycles-node-trees
|
# Auto-execution Cycles Node Trees
I have noticed that Blender doesn't redraw changes in Cycles Node tree when they made with Custom Property via scripted Drivers. I need run animation to see changes in real-time, but it isn't handy always.
For example, Animation Nodes have such feature:
I want to know is it possible to get something similar for Cycles Node Tree natively or with some addons/scripts?
Thank you for any tips and answers.
• What should be the trigger of the execution? why don't we just use animation nodes? I don't know why people still uses drivers? Use AN. – Omar Emara Feb 3 '17 at 8:30
• For example: link def temperatureI3(val): t = bpy.data.objects["Controller"]["Temperature"] colorNode = bpy.data.node_groups["Panel Texture"].nodes["ColorRamp.003"].color_ramp.elements[2].color result = (t-2)/16 print("colorNode is: " % colorNode) for x in range(0, len(colorNode)-1): if result >= 1: colorNode[x] = 1 else: colorNode[x] = 0 return result I like AN too, but not everything may be animated such way. May be I miss something. – M.O.Z.G Feb 3 '17 at 9:17
• Hmmm, I encountered a problem like this before where we wanted to edit a node inside a node group but blender had some problem in the dependencies graph of nodes that limited that kind of actions. If it is up to me, I would find a way around the color ramp node and expose a value to the user. We can mimic the function of Color ramp nodes using math, What are you using it for? – Omar Emara Feb 3 '17 at 9:20
• Another one thing is switchin textures via contionons with Math Nodes: link link I thinking about this problem more. May be it's unreachable, because Blender has problems with node groups. But I have tried to use new depsgraph, and it doesn't work for such cases. – M.O.Z.G Feb 3 '17 at 9:30
• Omar, thanks for tip. Yes this one case could be realized with math node like as via color ramp. But in this case GSLS viewport texture does not update even when animation has started. – M.O.Z.G Feb 3 '17 at 9:40
|
2019-11-12 10:32:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31279659271240234, "perplexity": 2172.726592576331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665521.72/warc/CC-MAIN-20191112101343-20191112125343-00339.warc.gz"}
|
http://tutorial.dask.org/05_futures.html
|
# Futures - non-blocking distributed calculations
## Contents
You can run this notebook in a live session or view it on Github.
# Futures - non-blocking distributed calculations¶
Submit arbitrary functions for computation in a parallelized, eager, and non-blocking way.
The futures interface (derived from the built-in concurrent.futures) provide fine-grained real-time execution for custom situations. We can submit individual functions for evaluation with one set of inputs, or evaluated over a sequence of inputs with submit() and map(). The call returns immediately, giving one or more futures, whose status begins as “pending” and later becomes “finished”. There is no blocking of the local Python session.
This is the important difference between futures and delayed. Both can be used to support arbitrary task scheduling, but delayed is lazy (it just constructs a graph) whereas futures are eager. With futures, as soon as the inputs are available and there is compute available, the computation starts.
Related Documentation
[1]:
from dask.distributed import Client
client = Client(n_workers=4)
client
[1]:
### Client
Client-235e02d6-a89f-11ed-940a-000d3a99faab
Connection method: Cluster object Cluster type: distributed.LocalCluster Dashboard: http://127.0.0.1:8787/status
## A Typical Workflow¶
This is the same workflow that we saw in the delayed notebook. It is for-loopy and the data is not necessarily an array or a dataframe. The following example outlines a read-transform-write:
def process_file(filename):
data = do_a_transformation(data)
destination = f"results/{filename}"
write_out_data(data, destination)
return destination
futures = []
for filename in filenames:
future = client.submit(process_file, filename)
futures.append(future)
futures
## Basics¶
Just like we did in the delayed notebook, let’s make some toy functions, inc and add, that sleep for a while to simulate work. We’ll then time running these functions normally.
[2]:
from time import sleep
def inc(x):
sleep(1)
return x + 1
def double(x):
sleep(2)
return 2 * x
sleep(1)
return x + y
We can run these locally
[3]:
inc(1)
[3]:
2
Or we can submit them to run remotely with Dask. This immediately returns a future that points to the ongoing computation, and eventually to the stored result.
[4]:
future = client.submit(inc, 1) # returns immediately with pending future
future
[4]:
Future: inc status: pending, type: NoneType, key: inc-3f3843be9298a41024496b6107d9fc10
If you wait a second, and then check on the future again, you’ll see that it has finished.
[5]:
future
[5]:
Future: inc status: pending, type: NoneType, key: inc-3f3843be9298a41024496b6107d9fc10
You can block on the computation and gather the result with the .result() method.
[6]:
future.result()
[6]:
2
### Other ways to wait for a future¶
from dask.distributed import wait, progress
progress(future)
shows a progress bar in this notebook, rather than having to go to the dashboard. This progress bar is also asynchronous, and doesn’t block the execution of other code in the meanwhile.
wait(future)
blocks and forces the notebook to wait until the computation pointed to by future is done. However, note that if the result of inc() is sitting in the cluster, it would take no time to execute the computation now, because Dask notices that we are asking for the result of a computation it already knows about. More on this later.
### Other ways to gather results¶
client.gather(futures)
gathers results from more than one future.
## client.compute¶
Generally, any Dask operation that is executed using .compute() or dask.compute() can be submitted for asynchronous execution using client.compute() instead.
Here is an example from the delayed notebook:
[7]:
import dask
def inc(x):
sleep(1)
return x + 1
sleep(1)
return x + y
x = inc(1)
y = inc(2)
So far we have a regular dask.delayed output. When we pass z to client.compute we get a future back and Dask starts evaluating the task graph.
[8]:
# notice the difference from z.compute()
# notice that this cell completes immediately
future = client.compute(z)
future
[8]:
[9]:
future.result() # waits until result is ready
[9]:
5
When using futures, the computation moves to the data rather than the other way around, and the client, in the local Python session, need never see the intermediate values.
## client.submit¶
client.submit takes a function and arguments, pushes these to the cluster, returning a Future representing the result to be computed. The function is passed to a worker process for evaluation. This looks a lot like doing client.compute(), above, except now we are passing the function and arguments directly to the cluster.
[10]:
def inc(x):
sleep(1)
return x + 1
future_x = client.submit(inc, 1)
future_y = client.submit(inc, 2)
future_z = client.submit(sum, [future_x, future_y])
future_z
[10]:
Future: sum status: pending, type: NoneType, key: sum-f5f07602d0c3f0031b8f12726951a0ae
[11]:
future_z.result() # waits until result is ready
[11]:
5
The arguments toclient.submit can be regular Python functions and objects, futures from other submit operations or dask.delayed objects.
Each future represents a result held, or being evaluated by the cluster. Thus we can control caching of intermediate values - when a future is no longer referenced, its value is forgotten. In the solution, above, futures are held for each of the function calls. These results would not need to be re-evaluated if we chose to submit more work that needed them.
We can explicitly pass data from our local session into the cluster using client.scatter(), but usually it is better to construct functions that do the loading of data within the workers themselves, so that there is no need to serialize and communicate the data. Most of the loading functions within Dask, such as dd.read_csv, work this way. Similarly, we normally don’t want to gather() results that are too big in memory.
Let’s imagine a task that sometimes fails. You might encounter this when dealing with input data where sometimes a file is malformed, or maybe a request times out.
[12]:
from random import random
def flaky_inc(i):
if random() < 0.2:
raise ValueError("You hit the error!")
return i + 1
If you run this function over and over again, it will sometimes fail.
>>> flaky_inc(2)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [65], in <cell line: 1>()
----> 1 flaky_inc(2)
Input In [61], in flaky_inc(i)
3 def flaky_inc(i):
4 if random() < 0.5:
----> 5 raise ValueError("You hit the error!")
6 return i + 1
ValueError: You hit the error!
We can run this function on a range of inputs using client.map.
[13]:
futures = client.map(flaky_inc, range(10))
Notice how the cell returned even though some of the computations failed. We can inspect these futures one by one and find the ones that failed:
[14]:
for i, future in enumerate(futures):
print(i, future.status)
0 pending
1 pending
2 pending
3 pending
4 pending
5 pending
6 pending
7 pending
8 pending
9 pending
You can rerun those specific futures to try to get the task to successfully complete:
[15]:
futures[5].retry()
[16]:
for i, future in enumerate(futures):
print(i, future.status)
0 finished
1 finished
2 finished
3 finished
4 finished
5 finished
6 finished
7 finished
8 finished
9 finished
A more concise way of retrying in the case of sporadic failures is by setting the number of retries in the client.compute, client.submit or client.map method.
Note: In this example we also need to set pure=False to let Dask know that the arguments to the function do not totally determine the output.
[17]:
futures = client.map(flaky_inc, range(10), retries=5, pure=False)
future_z = client.submit(sum, futures)
future_z.result()
[17]:
55
You will see a lot of warnings, but the computation should eventually succeed.
## Why use Futures?¶
The futures API offers a work submission style that can easily emulate the map/reduce paradigm. If that is familiar to you then futures might be the simplest entrypoint into Dask.
The other big benefit of futures is that the intermediate results, represented by futures, can be passed to new tasks without having to pull data locally from the cluster. New operations can be setup to work on the output of previous jobs that haven’t even begun yet.
|
2023-03-25 01:56:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2621024250984192, "perplexity": 4514.210972978094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00638.warc.gz"}
|
https://www.physicsforums.com/threads/differentials-in-calculus.801942/
|
Differentials in Calculus.
1. Mar 7, 2015
Timothy S
I have recently come across the use of differentials in visualizing and thinking about calculus. In this method, one thinks of dx/dy as an actual fraction of infinity small yet real numbers. How is it possible to apply this to implicit functions?
2. Mar 8, 2015
pwsnafu
Contradiction in terms. Every real number is finite by definition. What you can do is consider non-real numbers that satisfy $0<\epsilon<r$ for every real number r. This is the core of infinitesimals.
|
2017-10-19 11:30:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5553228259086609, "perplexity": 543.4941586969987}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823282.42/warc/CC-MAIN-20171019103222-20171019123222-00280.warc.gz"}
|
https://www.emathematics.net/g5_multiplication.php?def=three_numbers
|
User:
• Matrices
• Algebra
• Geometry
• Funciones
• Trigonometry
• Coordinate geometry
• Combinatorics
Suma y resta Producto por escalar Producto Inversa
Monomials Polynomials Special products Equations Quadratic equations Radical expressions Systems of equations Sequences and series Inner product Exponential equations Matrices Determinants Inverse of a matrix Logarithmic equations Systems of 3 variables equations
2-D Shapes Areas Pythagorean Theorem Distances
Graphs Definition of slope Positive or negative slope Determine slope of a line Ecuación de una recta Equation of a line (from graph) Quadratic function Posición relativa de dos rectas Asymptotes Limits Distancias Continuity and discontinuities
Sine Cosine Tangent Cosecant Secant Cotangent Trigonometric identities Law of cosines Law of sines
Ecuación de una recta Posición relativa de dos rectas Distancias Angles in space Inner product
Multiply three or more numbers up to 2 digits each
Multiply 12 × 8 × 6 =
First, multiply 12 and 8:
12 × 8 × 6 = ? 96 × 6 = ?
Now multiply 96 and 6:
96 × 6 = 576
Multiply 8 x 1 x 29
Solution:
|
2021-09-18 23:59:16
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8111743927001953, "perplexity": 9450.383674106639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00171.warc.gz"}
|
https://api-project-1022638073839.appspot.com/questions/what-is-the-derivative-of-sqrt-6-x-5-using-the-power-rule
|
# What is the derivative of (sqrt 6)/x^5 using the Power Rule?
$f \left(x\right) = {x}^{- 5} \sqrt{6}$
now derive as ${x}^{n}$ giving $n {x}^{n - 1}$
to get: $f ' \left(x\right) = - 5 {x}^{- 6} \sqrt{6} = - 5 \frac{\sqrt{6}}{x} ^ 6$
where $\sqrt{6}$ is a constant.
|
2021-04-14 05:07:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924822211265564, "perplexity": 1061.296614197831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076819.36/warc/CC-MAIN-20210414034544-20210414064544-00130.warc.gz"}
|
http://matlabtricks.com/post-15/a-nice-solution-on-matlab-cody-problem-73
|
11 October, 2013
# A nice solution on MATLAB Cody problem #73
MATLAB Cody is a useful place not only for having the compelling to solve different interesting problems in the smartest way, but for learning from the solutions of others. Now a very clever solution on Cody problem #73 is discussed. The task is:
Replace NaNs with the number that appears to its left in the row. If there are more than one consecutive NaNs, they should all be replaced by the first non-NaN value to the immediate left of the left-most NaN. If the NaN is in the first column, default to zero.
An example input and output pair is:
x = [NaN 1 2 NaN NaN 17 3 -4 NaN]
y = [ 0 1 2 2 2 17 3 -4 -4]
The straightforward solution is to use a cycle while each NaNs are replaced. In the example below we put first a zero to the beginning of the array to handle the case of the NaNs at the front. Then in a cycle we find the position of the first NaN, and replace it with the previous element:
function x = replace_nans(x)
x = [0 x] % put a zero into the first position
while any(isnan(x)) % loop until there is any NaN
NaNList = isnan(x) % get a logical list of NaNs
NaNPos = min(find(NaNList)) % find the position of the first NaN
x(NaNPos) = x(NaNPos - 1) % new value: the value on the left
end
x(1) = [] % remove the zero from the first position
end
% an example usage of the function
replace_nans([NaN 1 2 NaN NaN 17 3 -4 NaN]);
The output of the function is:
x =
0 NaN 1 2 NaN NaN 17 3 -4 NaN
NaNList =
0 1 0 0 1 1 0 0 0 1
NaNPos =
2
x =
0 0 1 2 NaN NaN 17 3 -4 NaN
NaNList =
0 0 0 0 1 1 0 0 0 1
NaNPos =
5
x =
0 0 1 2 2 NaN 17 3 -4 NaN
NaNList =
0 0 0 0 0 1 0 0 0 1
NaNPos =
6
x =
0 0 1 2 2 2 17 3 -4 NaN
NaNList =
0 0 0 0 0 0 0 0 0 1
NaNPos =
10
x =
0 0 1 2 2 2 17 3 -4 -4
x =
0 1 2 2 2 17 3 -4 -4
Most of the solutions use this approach, but Ankur Pawar had another way of thinking. See his code first, it has been modified a bit for better understanding:
function x = replace_nans(x)
NotNaNs = ~isnan(x) % positions of non-NaN elements
indices = cumsum(NotNaNs) % indexing based on non-NaN positions
% filter for non-NaN elements and put a zero into the first position
NotNaNList = [0 x(NotNaNs)]
x = NotNaNList(indices + 1) % do the re-indexing of the vector
end
% an example usage of the function
x = [NaN 1 2 NaN NaN 17 3 -4 NaN]
replace_nans(x);
The output of the function is:
x =
NaN 1 2 NaN NaN 17 3 -4 NaN
NotNaNs =
0 1 1 0 0 1 1 1 0
indices =
0 1 2 2 2 3 4 5 5
NotNaNList =
0 1 2 17 3 -4
x =
0 1 2 2 2 17 3 -4 -4
This is a very clever solution using no loops:
• First a logical array is generated indicating the non-NaN elements.
• The next step is the key: the cumulated summation. The result of this operation is a list of indices, showing that which non-NaN element shall stay at the given position: in the indices vector from the left to the right the index is inceremented only, when we reach a non-NaN element otherwise it remains the same.
• Then we collect the non-NaN elements and add a zero to handle the case of the first-placed NaNs.
• The re-indexing is left only: we are ready.
A simple but great and useful approach: thanks, Ankur!
Tags:cody
|
2018-04-27 00:58:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2920724153518677, "perplexity": 1442.814851352659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948738.65/warc/CC-MAIN-20180427002118-20180427022118-00408.warc.gz"}
|
https://dsp.stackexchange.com/questions/15853/fft-of-a-pure-sine-wave-is-not-a-single-peak?noredirect=1
|
# FFT of a pure sine wave is not a single peak
For fun, I computed the FFT of a pure sine wave. I chose chose the sample length to be an even multiple of the signal period, so that I don't see windowing-effects. Here is the Matlab code that I used:
% Signal properties
f_signal = 1000;
% Sampling properties
f_sample = 4000;
T_total = 100;
% Calculate signal vector
t = 0:1/f_sample:T_total;
N = length(t);
V = sin(2*pi*f_signal*t);
% Calculate FFT
f_spectrum = (0:N-1)/N*f_sample;
V_spectrum = abs(fft(V))/(N/2);
% Plot
figure(2)
semilogy(f_spectrum, V_spectrum)
set(gca, 'XLim', [f_signal-5, f_signal+5])
xlabel('Frequency in Hz')
ylabel('Normalized amplitude')
But instead of a single peak, I get the following result:
This strikes me as odd for two reasons:
1. The effect is really huge. The center peak has an amplitude of 0.9 instead of 1, and the next bin to the right has an amplitude of 0.3 instead of 0.
2. While I would expect that limited numerical precision broadens my peak a bit, I am quite surprised that the effect is so silky smooth. I always thought of rounding errors as "noise" of the least significant digit. Noise usually leads to jarry spectra.
I observed that if I choose a slightly lower frequency (999.9975) then the effect is minimal. What is going on here?
• I believe you've discovered leakage phenomena, my friend. Welcome to digital domain! – jojek Apr 25 '14 at 19:26
• The latter part of this answer describes essentially the result that you found and the cause thereof. – Dilip Sarwate Apr 25 '14 at 22:07
t = 0:1/f_sample:T_total-1/f_sample;
|
2019-07-16 21:31:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7417310476303101, "perplexity": 1490.6208636972083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524879.8/warc/CC-MAIN-20190716201412-20190716223412-00152.warc.gz"}
|
https://www.zigya.com/study/book?class=12&board=nbse&subject=Chemistry&book=Chemistry+I&chapter=Chemical+Kinetics&q_type=&q_topic=other&q_category=&question_id=CHEN12046496
|
## Book Store
Currently only available for.
`CBSE` `Gujarat Board` `Haryana Board`
## Previous Year Papers
Download the PDF Question Papers Free for off line practice and view the Solutions online.
Currently only available for.
`Class 10` `Class 12`
How can the rate of a fast reaction be determined by Flash Photolysis?
Flash photolysis: When we pass a powerful flash of short duration or laser beam, through the reaction mixture to initiate a reaction the atoms, ions or free radicals are formed.
These atoms, ions or free radicals formed can be identified by passing a second, Flash of light through the mixture immediately after the first flash.
For this the absorption spectrum of the mixture is monitored continuously at small regular intervals after first flash and the changes in the spectrum with time indicate the various processes occurring in the system.
It can also be studied by observing some other property like electrical conductance or magnetic property of the reaction mixture.
126 Views
The conversion of molecules X to Y follows second order kinetics. If concentration of X is increased to three times how will it affect the rate of formation of Y?
Let the reaction is X →Y
This reaction follows second order kinetics.
So that, the rate equation for this reaction will
Rate, R = k[X]2 .............(1)
Let initial concentration is x mol L−1,
Plug the value in equation (1)
Rate, R1 = k .(a)2
= ka2
Given that concentration is increasing by 3 times so new concentration will 3a mol L−1
Plug the value in equation (1) we get
Rate, R2 = k (3a)2
= 9ka2
We have already get that R1 = ka2 plus this value we get
R2 = 9 R1
So that, the rate of formation will increase by 9 times.
Rate = k[A]2
If concentration of X is increased to three times,
Rate = k[3A]2
or Rate = 9 k A2
Thus, rate will increase 9 times.
972 Views
For the reaction R → P, the concentration of a reactant changes from 0.03 M to 0.02 M is 25 minutes. Calculate the average rate of reaction using units of time both in minutes and seconds.
Given that
Initial concentration, [R1] = 0.03
Final concentration, [R2] = 0.02
Time taken ∆t = 25 min = 25 × 6 0 = 1500 sec (1 min = 60 sec )
The formula of average rate of change
(i) Average rate
(ii) Average rate
1717 Views
In a reaction 2A → Products, the concentration of A decreases from 0.5 mol to 0.4 mol L–1 in 10 minutes. Calculate the rate during this interval.
Given that
Initial concentration [A1] =0.5
Final concentration [A2] =0.4
Time is = 10 min
Rate of reaction = Rate of disappearance of A.
Rate of reaction = $-\frac{1}{2}\frac{∆\left[\mathrm{A}\right]}{∆\mathrm{t}}\phantom{\rule{0ex}{0ex}}$
2088 Views
A first order reaction has a rate constant 1.15 x 10–3 s–1. How long will 5 g of this reactant take to reduce to 3 g?
Given that
Initial quantity, [R]o= 5 g
Final quantity, [R] = 3 g
Rate constant, k = 1.15 x 10−3 s−1
Formula of 1st order reaction,
We know that
$\mathrm{t}=\frac{2.303}{\mathrm{k}}\mathrm{log}\frac{{\left[\mathrm{R}\right]}_{0}}{\left[\mathrm{R}\right]}$
or
1299 Views
For a reaction, A + B → Product; the rate law is given by, r = k[ A]1/2 [B]2. What is the order of reaction?
The order of the reaction is sum of the powers on concentration.
So that sum will
r = k[A]
1/2[B]2
Order of reaction =
1504 Views
|
2019-03-25 10:38:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6520159244537354, "perplexity": 3311.69790684415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203865.15/warc/CC-MAIN-20190325092147-20190325114147-00097.warc.gz"}
|
http://math.stackexchange.com/tags/statistics/new
|
# Tag Info
1
So, I will explain a solution that does not introduce a binomial random variable. Concretely, there is only one road that will lead him to the mall and three other roads that will not lead him to the mall, but back to the starting position. The problem is asking: what is the probability $A$ that in his first attempt, he chooses one of these three incorrect ...
0
Presumably the traveler won’t choose the same road again on his second try, so this is sampling without replacement. For the second choice to be correct, the first has to be incorrect, so the probability is, assuming that the choices are uniformly distributed, $\frac34\cdot\frac13=\frac14$.
1
Consistency is an asymptotic property, which roughly asks, as our sample becomes large, does our estimator become accurate. Bias on the other hand, is a not an asymptotic property. The bias tells us, given a sample, how off our estimator is in expectation. There are consistent estimators that are biased and unbiased estimators that are not consistent. Based ...
0
A multilinear map is a function which is linear in all arguments, i.e. of the form (for three vector variables in two dimensions) $$m(\mathbf p,\mathbf q,\mathbf r)=\\Ap_xq_xr_x+Bp_yq_xr_x+Cp_xq_yr_x+Dp_yq_yr_x+Ep_xq_xr_y+Fp_yq_xr_y+Gp_xq_yr_y+Hp_yq_yr_y.$$ It is possible to perform linear regression on such a model, but this is uncommon.
0
Assuming you mean Multivariate Linear Regression by multi-linear coefficient: In Multiple Linear Regression you have multiple predictors /independent variables ($x_1,\cdots,x_n$) and only one depentend variable $y$: $$y=\beta_0+\beta_1x_1+\cdots+\beta_nx_n$$ In Multivariate Linear Regression you have multiple predictors ($x_1,\cdots,x_n$) and multiple ...
0
When you calculate the p do not round up, instead it'll just be (55%*24)/100=13.33 which is the correct answer.
1
Your procedure is fine. Alternately, let $A$ be the event $0.9\le X\le 1$, and let $B$ be the event $0.9\le Y\le 1$. It is clear that $\Pr(A)\ne 0$ and $\Pr(B)\ne 0$, but $\Pr(A\cap B)=0$. So $\Pr(A\cap B)\ne \Pr(A)\Pr(B)$, and therefore $X$ and $Y$ are not independent.
1
a) each flavor must be different and the order of flavors is unimportant? $31! / 3!(28)!$ Yes. $^{31}C_3$ or $\binom{31}{3}$ counts the ways to select 3 unique items from 31. b)each flavor must be different and the order of flavors is important? $31! / (28)!$ Likewise, $^{31}P_3$ or $\binom{31}{3}3!$ is the ways to select ...
1
a)if each couple is to sit together? $$4!~2!^4 = 8\cdot 6\cdot 4\cdot 2$$ $\checkmark$ LHS counts ways to arrange 4 couples, then ways to arrange partners in each couple. RHS counts ways to select a person, their partner, another person, their partner, and so on. Either way is okay (but see part c). b)if all men sit together? ...
1
a) and b) are correct. For c), we will use Inclusion/Exclusion. There are $8!$ arrangements without restriction. From this we need to subtract the number of bad arrangements, where at least one couple are next to each other. First we count the number of arrangements where Couple A are together. Tie them together with rope. There are then $7$ objects to be ...
0
We have been given: $$f_{X,Y}(x,y) = y^{-1}e^{-y} ~[0\leq x\leq y]$$ So then we know: \begin{align}f_Y(y) ~ = ~& \int_0^y y^{-1}e^{-y}~\operatorname d x~~[0\leq y]\\[1ex] =~& e^{-y}~[ 0\leq y]\end{align} Then, by change of variables (chain rule, Jaccobian, etc.): \begin{align}f_{XY\mid Y}(z\mid y) ~=~ & \lvert \dfrac{\mathrm d ... 0 If you have a term in a sum which is independent of the index then it's constant and you can simply factor it out of the sum. Do that with the denominator inside the square brackets: \begin{align} \sum_{i=1}^n\left[\frac{(x_i - \bar x)}{\sum_{j=1}^n (x_j - \bar x)^2}\right]^2 &= \sum_{i=1}^n \frac{(x_i - \bar x)^2}{\left(\sum_{j=1}^n (x_j - \bar ... 0 A much bettter estimate of the ratio of the range over the SD is given by the formula 1.897 + 0.6728*ln (N) where N is the sample size. This was derived from simulating 1000 data set in R. Stephen 1 My answer based on the second version of the original post: According to the central limit theorem, the standard deviation of the sample mean of n data from a population is \sigma_{\overline{X}}=\sigma_X/\sqrt{n}, where \sigma_X is the population standard deviation. In your case, \sigma_{\overline{X}}=40/\sqrt{100}=4. My answer based on the first ... 1 There are \binom{7}{2} equally likely ways to choose 2 people. There are \binom{3}{2} ways to choose two females, and \binom{3}{1}\binom{4}{1} ways to choose one female and one male, and \binom{4}{2} ways to choose two males. Thus the (simplified) probability of two females is \frac{1}{7}, the probability of one of each is \frac{4}{7}, and the ... 0 An error in your calculation: you're using the standard error of the mean (which divides by \sqrt n) when you should use just the plain standard deviation. You can see that 42% is the wrong answer for the probability of 17 or younger, since only 1/6 of the population is below 18. 0 The question is not stupid at all. The R help files are not very useful on this topic IMHO. The answer is somewhat hidden in the following quote from the R help file for anova.lm(): Normally the F statistic is most appropriate, which compares the mean square for a row to the residual sum of squares for the largest model considered. If something is ... 1 (1) The statistic T has Poisson(n\lambda) distribution so the expectation of \delta(T) must involve n\lambda: E[\delta(T)] = \sum_{t=0}^\infty \delta(t)P(T=t)=\sum_{t=0}^\infty\delta(t)e^{-n\lambda}{(n\lambda)^t\over t!} They then abbreviate n\lambda as \gamma for the sake of saving ink. (2) The equality comes from the line above it (which ... 2 Let X_n=B(n,p) be a binomially distributed random variable. Also notice that X_n=Y_1+Y_2+\cdots+ Y_n where Y_i are i.i.d. Bernoulli with parameter p. Now observe that \begin{align} \sum_{k=0}^n k\frac{n!}{k!\,(n-k)!}p^k(1-p)^{n-k}&= \operatorname{E}(X_n)\\ &= \operatorname{E}( Y_1+Y_2+\cdots Y_n)\\ &=\operatorname{E}( ... 12 We have this sum: \sum_{k=0}^n k\frac{n!}{k!\,(n-k)!}p^k(1-p)^{n-k} \tag 1 $$First notice that when k=0, the term k\dfrac{n!}{k!\,(n-k)!}p^k(1-p)^{n-k} is 0, and next notice that when k\ne0 then$$ \frac k {k!} = \frac 1 {(k-1)!} $$so that$$ k\frac{n!}{k!\,(n-k)!}p^k(1-p)^{n-k} = \frac{n!}{(k-1)!(n-k)!} p^k (1-p)^{n-k}. $$The two expressions ... 7 The trick is the using the identity k { n \choose k} = n {n-1 \choose k-1}.$$\begin{align*} &\sum_{k=1}^n k { n \choose k } p^k (1-p)^{n-k}\\ &= \sum_{k=1}^n n { n-1 \choose k-1} p^k (1-p)^{n-k}\\ &=np \sum_{k=1}^n {n-1 \choose k-1} p^{k-1} (1-p)^{(n-1)-(k-1)}\\ &=np (p+(1-p))^n\\ &=np \end{align*}2 The error is that you have have assumed that the distribution of \frac{V_1}{V_1+V_2} is distributed as F(5,14). This is not the case, as V_1 and V_1+V_2 are not independent (have a look at the Characterisation section in here) - on the other hand, \frac{V_1/5}{V_2/9} would be distributed as F(5,9). Going back to the problem at hand, we can ... 1 There are 10! possible arrangements for the first person, and any of these will do. For the second person to match the first person's order, they must arrange the numbers in the exact order that person 1 did, meaning there is a probability of 1 in 10! that they are in the same order. So yes, you are correct. Mathematically, the most intuitive way to ... 2 For a), what about x<0 and x>1?. For c), use the rule \begin{align*} \mathbb{P}(A | B ) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)}. \end{align*} And for d), you have for all n \in \mathbb{N}, so in particular for n=1,2 \begin{align*} \mathbb{E}(X^n) = \int_{-\infty}^\infty x^n f(x) \text{d}x = \int_0^1 x^n f(x) \text{d}x. \end{align*} 0 Use the formula Pr({X=x| Y=0 }) = \frac{Pr({X=x \cap Y=0})}{Pr(Y=0)} \begin{align} Pr(Y=0) = 20/50 \\ \end{align} $$Pr({X=x \cap Y=0}) is the first row in the table. This results in the probabilities 15/20, 2/20 and 3/20 respectively. 1 The authors' claim can be proved using the Skorohod representation theorem. To prove one direction: if the ratio R_n:=S_n/V_n converges in distribution to a limit R, then there exist variables R_1^*, R_2^*, \ldots and R^* with the same distributions as R_1, R_2,\ldots and R respectively such that R_n^*\to R^* almost surely. The identity$$ T_n ...
1
The authors reference a 1969 paper by Efron. The relevant section in Efron appears to be a reference to what at that time was an unpublished paper by Logan, Mallows, Rice and Shepp (see p.16, last paragraph of Efron). However, the article you are reading give the actual paper, published in 1973 :Limit distributions of self-normalized sums. They reference ...
0
Suppose $U_1 = n$. That event is more probable if the $Y$s are smaller than they are in a typical sample. That the $Y$s are smaller than they are in a a typical sample makes it more probable than it would otherwise be that $U_2= n$. Therefore $U_1, U_2$ are not independent.
1
community wiki answer so the question can be closed Your solutions are correct.
0
Let event A=The player is using the drug B=The player is not using the drug P=The test result is positive. N=The test result is negative. We have $Pr(A)=0.03, Pr(B)=0.97$ and $Pr(P)+Pr(N)=1$ We also know $Pr(P|A)=0.93$ and $Pr(N|B)=0.98$. (a) By Law of Total Probability, we have ...
0
1) What do you know? The first step is to summarise the information provided using the usual probability symbols. Let $T$ be the event of a positive test; $D$ be the event of using; and $T^\complement,D^\complement$ their complements. Then we have been told: \begin{align}\mathsf P(T\mid D)=0.93 \\ \mathsf P(T^\complement\mid D^\complement)=0.98 ... 0 For a), just use the basic formula P(A\cup B)+P(A\cap B)=P(A)+P(B). For b), just write what the conditional probability P(B|A) is (by formula), use a) and you're done. 2 We see that by inclusion-exclusion,P(B \cap A) = P(B) + P(A) - P(A \cup B)= .36 + .45 - .55 = .26,$$So you ar correct about that. Since P(A) \cdot P(B) = .36 \cdot .45 = .162 \neq .26., It is apparent that P(A) \cdot P(B) \neq P(A \cap B), and so A and B are not independent. 1 There is an error in your calculations involving \hat\mu_j and \hat\mu_i (i.e., the last three terms in your expansion). Using a non-clashing index of summation, we have$$ \hat\mu_j=\frac1n\sum_kx_{kj}, $$so that$${\rm Cov}(x_{li},\hat\mu_j)=\frac1n\sum_k {\rm Cov}(x_{li},x_{kj}).\tag1The terms in the sum (1) are zero when k\ne l (by ... 0 For a, you draw four cards the second time (so Y_2=4) unless there is at least one 2 in the first draw. What is the chance of no 2's in the first draw? What is the chance of one 2 in the first draw? In that case you have Y_2=3 and so on. Your result should just be four numbers. As the problem is stated, if you draw four twos on the first draw, ... 0 Here is a simulation using R statistical ssoftware of a million performances of this experiment, where P(Heads) = .3 for the biased coin. Results should be accurate to a couple of decimal places. You can use them as a 'reality check' for your work. m = 10^6; x = y = numeric(m) for(i in 1:m) { x[i] = rbinom(1, 2, .5) y[i] = rbinom(1, x[i], .3) } ... 0 First calculate out Cov(X,Y) using Cov(X,Y)=\sum(X-\mu_X)(Y-\mu_Y)f(X,Y) where f(X,Y) is the correspondig pdf. Then use the formula of correlation coefficient: cor(X,Y)=\frac{Cov(X,Y)}{\mu_X\mu_Y}You can take a look at this example: https://onlinecourses.science.psu.edu/stat414/book/export/html/94 However, I think you made a few mistake in part a) and ... 1 Let me address your confusion about the argument. The density of the product of two independent random variables is not their convolution of their densities. It is more complicated. And it does not, therefore, correspond to the additive framework after you take the Fourier transform. This is possible, but it requires the Mellin transform. 0 Indeed, one must need the joint distribution. You can make further assumptions by introducing a copula https://en.wikipedia.org/wiki/Copula_(probability_theory) -1 for analog continous signals, we have time average.Time average is averaged quantity of a single system over a time inetrval directly related to a real experiment. or discrete signals, we have ensemble average. Ensemble average is averaged quantity of a many identical systems at a certain time. 0 The CLT should work very well for the sum of 10 uniform distributions. As a check, here is a brief simulation in R which should approximate the answer to a couple of decimal places--without direct appeal to the CLT. m = 10^6; n = 10; x = runif(m*n) DTA = matrix(x, nrow=m) # each row a sample of 10 uniforms s = rowSums(DTA) mean(s > 7) ## 0.013626 ... -1 have you tried this? any way I don't know if it works just hope it helps $$P(Z-Y< t)= \int P(Z-Y<t \lvert Y=y) P(Y=y) dy = \int P(Z<y+t) P(Y=y) dy$$ If you could find the distribution function of Z-Y you would be able to determine it as a specific random variable. I mean you may want to work with distribution function ... 2 Here is the current edition of the question: If X is a symmetric n-dimensional random vector with mean 0 then is it true that: \begin{align*} & X \text{ follows a multivariate normal law} \\ & \text{iff} \\ & \|X\| \text{is a chi random variable with n degrees of freedom?} \end{align*} Let X=(X_1,\ldots,X_n). As phrased above, ... 1 In general it depends on the string. P_1 does not depend on your chosen string. It can be expressed as 1-\left(1-2^{-N}\right)^M. 2^{-N} is the probability of matching each element of B_1, so \left(1-2^{N}\right)^M is the probability of not matching any of the M elements of the array. Consider for simplicity the case where M=N=2. In this ... 1 It seems likely that your question will be closed. But I think I see some of the issues that are causing you trouble. So I will try to give you some detailed help in case 'Answers' get shut down. Misprint rate for one page. In a Poisson problem, you need to make sure the rate \lambda matches the random variable of the problem. @Henry is right that the ... 1 \Phi(.)'s are not uniform on [0,1] if \mu \neq \beta. This idea comes from the fact that: Y=F(X) \sim Unif[0,1] if F is a CDF of X. In your case, \Phi(X-\mu) is CDF of Z \sim N(\beta-\mu,1). So at least the drift \mu matters in this expectation, that can be interpreted as expectation E[g(x)] of the function g(x) = ... 2 The error is that the first term should have z-x while the second should have x-z, so that the integrand is always positive. Then you get that result using integration by parts. For the first term:\int_{-\infty}^z (z-x) \,dF(x) = (z-z) F(z) - \lim_{y \to -\infty} (z-y) F(y) + \int_{-\infty}^z F(x) \,dx.$$The first term is trivially zero, the second ... 1 HINT: You made a mistake in this step:$$P((y-2)^2>a^2z)=0.9 \implies P\left(3\frac{(y-2)}{3}>a\sqrt{\frac{z}{7}}\sqrt{7}\right)=0.9$$What you should have found was:$$P((y-2)^2>a^2z)=0.9 \implies P\left(3\left|\frac{(y-2)}{3}\right|>\left|a\right|\sqrt{\frac{z}{7}}\sqrt{7}\right)=0.9 If we simplify this and assume $a\geq 0$, then we get: ...
0
The observed data implies $r^2\ge3^2+1^2=10$, so $r\ge\sqrt{10}$. Thus the hypothesis $r\le2$ has already been refuted. The likelihood function for $r$ is now $f(r)=\frac{\sqrt{10}}{r^2}$ for $r\ge\sqrt{10}$ and $0$ otherwise.
0
Since a sum of independent poisson variables (which is going to be my assumption) is poisson distributed, the total number of accidents in a year is poisson distributed with mean $12\times 10$. Fromt the CDF of this distribution, you should find your answer immediately.
Top 50 recent answers are included
|
2016-05-25 03:32:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9465864896774292, "perplexity": 458.7207589049066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274119.11/warc/CC-MAIN-20160524002114-00221-ip-10-185-217-139.ec2.internal.warc.gz"}
|
http://mathoverflow.net/questions/18513/k-theory-as-a-generalized-cohomology-theory?sort=oldest
|
# K-theory as a generalized cohomology theory
Which of the statements is wrong:
1. a generalized cohomology theory (on well behaved topological spaces) is determined by its values on a point
2. reduced complex $K$-theory $\tilde K$ and reduced real $K$-theory $\widetilde{KO}$ are generalized cohomology theories (on well behaved topological spaces)
3. $\tilde K(*)= \widetilde{KO} (*)=0$
But certainly $\tilde K\neq \widetilde{KO}$.
-
1 is doubly wrong. First, you need to distinguished generalized cohomology theories and reduced generalized cohomology theories. If you want to work with the latter, you should replace "a point" in 1 by "$S^0$", and then the corrected version of 3 no longer holds. But even this new version 1' is false; a generalized cohomology theory is not determined by its coefficients, unless they are concentrated in a single degree (example: complex K-theory vs. integer cohomology made even periodic).
We're thinking of generalized cohomology theories as taking values in graded abelian groups. Thus $\tilde{K}^q(S^0)\not\approx \tilde{KO}^*(S^0)$, for instance if $q\equiv -1,-2,-6\mod 8$. – Charles Rezk Mar 17 '10 at 19:51
But the statement about cohomology theories being determined by their coefficients is not totally wrong -- if you have a natural transformation $K \to L$ of homology theories which induce an isomorphism on the point (or $\mathbf{S}^0$ in the case of reduced theories), then it's an isomorphism. But you do need to have a natural transformation in the first place. – Tilman Mar 17 '10 at 22:08
|
2015-02-27 07:44:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7627041339874268, "perplexity": 376.6709883769475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936460577.67/warc/CC-MAIN-20150226074100-00237-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://email.esm.psu.edu/pipermail/macosx-tex/2020-June/057047.html
|
[OS X TeX] toolbar
Robert Bruner robert.bruner at wayne.edu
Sun Jun 21 13:45:28 EDT 2020
\begin{rant}
This is an interesting philosophical issue on which I am completely at odds with the software culture. The version of TeXShop that I have does everything I need. I am not interested in spending time reconfiguring things and learning new ways to do the same things I already know how to do. I would rather do mathematics than fine tune my computer's software.
But, I have no choice: if I don't constantly update things I get hit by issues like this.
In mathematics, once something is proved, it is proved and we can move on to new issues. Of course, we return to issues and refine our understanding, clean up proofs, find new and better axioms, etc., but the pace is slower, and the old proofs are still valid if they ever were.
Would it be possible to design software for similar backward compatibility. Experience suggests not, but 'provably correct' software might enable this.
\end{rant}
\begin{counter-rant}
TeXShop is great and I am earnestly grateful for the effort that has gone into making it such an excellent tool.
\end{counter-rant}
Regards,
Bob Bruner
PS: 2016 was only 4 years ago.
________________________________________
From: MacOSX-TeX <macosx-tex-bounces at email.esm.psu.edu> on behalf of Herbert Schulz <herbs at wideopenwest.com>
Sent: Sunday, June 21, 2020 10:55 AM
To: List TeX on Mac OS X Mailing
Subject: Re: [OS X TeX] toolbar
> On Jun 21, 2020, at 8:31 AM, Robert Bruner <robert.bruner at wayne.edu> wrote:
>
> Mac OS 10.15.4
> TeXShop 3.6.2
> Yes, I copied my files from my existing, 2013 vintage, MacBookPro.
> Not sure about the single/multiple window mode: I have one
> window with the .tex file and a separate preview window for the pdf.
> The .tex window is the one which keeps coming up w/o the typeset button. If this can't be fixed, I will probably just convert to always using Cmd-T to typeset, which works fine.
>
> I was vague in my first post because i didn't want to admit how old my TeXShop was :-(.
>
> Bob
>
>
> ________________________________________
> From: MacOSX-TeX <macosx-tex-bounces at email.esm.psu.edu> on behalf of Herbert Schulz <herbs at wideopenwest.com>
> Sent: Sunday, June 21, 2020 9:20 AM
> To: List TeX on Mac OS X Mailing
> Subject: Re: [OS X TeX] toolbar
>
>> On Jun 21, 2020, at 7:34 AM, Robert Bruner <robert.bruner at wayne.edu> wrote:
>>
>> I just moved to a new MacBook Pro. Now, when TeXShop starts,
>> the 'Typseset' button is missing. I can go to Window > Customize Toolbar, to add 'Typeset', but next time I start TeXShop, the 'Typeset'
>> button is again missing.
>>
>> What might I be doing wrong?
>>
>> Bob Bruner
>> Math
>> Wayne State
>
> Howdy,
>
> I think it would be helpful knowing the following information:
>
> what version of TeXShop you are using;
> what macOS version you are using (can I assume Catalina?);
> are you using Single or Multiple Window Mode; and,
> did you carry the preferences over from another computer.
>
> Good Luck,
>
> Herb Schulz
> (herbs at wideopenwest dot com)
>
Howdy,
Really, TeXShop 3.6.2? TeXShop is presently at version 4.44! Please go to <https://pages.uoregon.edu/koch/texshop/texshop.html> and download and install the latest version. There are probably a bunch of things that have changed so you may have to do some re-configuring.
Good Luck,
Herb Schulz
(herbs at wideopenwest dot com)
----------- Please Consult the Following Before Posting -----------
TeX FAQ: http://www.tex.ac.uk/faq
List Reminders and Etiquette: https://sites.esm.psu.edu/~gray/tex/
List Archives: http://dir.gmane.org/gmane.comp.tex.macosx
https://email.esm.psu.edu/pipermail/macosx-tex/
TeX on Mac OS X Website: http://mactex-wiki.tug.org/
List Info: https://email.esm.psu.edu/mailman/listinfo/macosx-tex
|
2021-07-28 11:31:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.582915723323822, "perplexity": 5913.059988512194}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153709.26/warc/CC-MAIN-20210728092200-20210728122200-00374.warc.gz"}
|
https://astronomy.stackexchange.com/questions/35560/vectorial-construction-of-tidal-forces-or-why-is-it-centripetal-at-low-tide
|
# Vectorial construction of tidal forces - or why is it centripetal at low tide
I am seeking to understand why the tidal force is pushing towards the center of the earth (centripetal), at a point that is making with the center of the earth a 90º angle to the moon-earth axis.
This is very well seen in the common picture (from the Wikipedia "Tidal force" article):
I remember a vectorial construction of that from the French edition of Scientific American ("Pour la Science"), in their August 2001 issue. But I lost it, and it is now behind a pay wall:
https://www.pourlascience.fr/sd/geosciences/les-caprices-des-marees-4441.php
I used to have the physics/mathematical background to understand that, and perhaps I could dig it out of the decade-old sediments of my memory. The vectorial construction looked interesting to me as slightly easier to explain to people around than the differential equations.
• I'd recommend viewing this video on the PBS Spacetime channel on YouTube which explains the tricky idea behind tides. – StephenG Mar 22 '20 at 12:24
• Your diagram, while correct, is misleading. That's what uhoh's answer points out – Carl Witthoft Mar 24 '20 at 17:12
• misleading how? BTW, this is not my diagram, but Wikipedia's. But I think it reflects rather well the local experience. – Jean-Denis Muys Mar 25 '20 at 19:13
The short answer which may or may not be an "Aha!" answer is that what is plotted is what's left over after a much larger, uniform force is subtracted.
The uniform force is the Force from the Moon evaluated at the center of the Earth, and the arrows show the deviation of the actual force from that average.
Why do we do it that way? When looking at the tidal forces on the oceans we treat the Earth as a rigid body with spherical symmetry. With that, we can use a variation of Newton's Shell theorem to say that the extended Earth will move the same way as if it were a point mass at its center.
Now the oceans are fluid (the opposite of rigid) and each bit responds to the Moon's force locally.
That force is (skipping constants unnecessary to make the cartoon plot)
$$F = -\frac{\mathbf{\hat{r}}}{|r|^2} = -\frac{\mathbf{r}}{|r|^3}$$
where the vector $$\mathbf{r}$$ is drawn from the Moon to some point on the Earth and $$\mathbf{\hat{r}}$$ is its unit vector. If the Earth's center is at $$\mathbf{\hat{x}} R$$ ($$R$$ is the Moon-Earth distance) and you subtract $$-\mathbf{\hat{x}}/R^2$$ you'll get that image.
In the plot below I've chosen the Earth-Moon distance to be only 10 Earth radii to highlight the slight left-right asymmetry. The tidal force is stronger on the side closer to the Moon.
import numpy as np
import matplotlib.pyplot as plt
R = 10.0
r_moon = np.array([R, 0], dtype=float)[:, None]
earth = np.zeros(2)[:, None]
theta = np.linspace(0, 2*np.pi, 49)
positions = earth + np.array([f(theta) for f in (np.cos, np.sin)])
r = positions-r_moon
F = -r * ((r**2).sum(axis=0))**-1.5
r = earth-r_moon
Fmean = -r * ((r**2).sum(axis=0))**-1.5
Ftide = F - Fmean
if True:
plt.figure()
plt.subplot(2, 1, 1)
(x, y), (Fx, Fy) = positions, 50.*Ftide
plt.quiver(x, y, Fx, Fy, width=0.005)
plt.plot(x, y, '-b')
plt.xlim(-2, 2)
plt.ylim(-1.5, 1.5)
plt.gca().set_aspect('equal')
plt.subplot(4, 1, 3)
for thing in F:
plt.plot(thing)
plt.subplot(4, 1, 4)
for thing in Ftide:
plt.plot(thing)
plt.show()
• Thanks. Very useful. If I understand correctly, the r vector is from any point (relevantly on the surface of the earth) to the center of the moon (which can be considered a single point of mass). Right? So your first "x" should be "r", right? Then I am not sure what is ˆx – Jean-Denis Muys Mar 25 '20 at 19:08
• and R being the moon-earth distance, is more precisely the distance between their respective centers? Right? – Jean-Denis Muys Mar 25 '20 at 19:09
• I am not sure what is the meaning of the "^" on top of r and x. Is it just to say these are vectors? – Jean-Denis Muys Mar 25 '20 at 19:09
• @Jean-DenisMuys This is a pretty standard notation. I'll leave a comment now and modify the text when I can. Bold face font denotes vectors, so $\mathbf{F}$ is a force vector,$\mathbf{r}$, is a vector from the lunar center to some point, $\mathbf{\hat{x}}$ is a unit vector in the x direction (pointing in the Moon to Earth direction). It's pronounced "x hat". – uhoh Mar 25 '20 at 22:00
• Sorry for having been ignorant, and thank you for making me slightly less so. :-) – Jean-Denis Muys Mar 25 '20 at 22:17
|
2021-07-25 18:35:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7316724061965942, "perplexity": 955.4326794145954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151760.94/warc/CC-MAIN-20210725174608-20210725204608-00638.warc.gz"}
|
https://nrich.maths.org/public/topic.php?code=-68&cl=3&cldcmpid=1867
|
Resources tagged with: Visualising
Filter by: Content type:
Age range:
Challenge level:
There are 178 results
Broad Topics > Mathematical Thinking > Visualising
Dotty Triangles
Age 11 to 14 Challenge Level:
Imagine an infinitely large sheet of square dotty paper on which you can draw triangles of any size you wish (providing each vertex is on a dot). What areas is it/is it not possible to draw?
Age 14 to 16 Challenge Level:
In this problem we are faced with an apparently easy area problem, but it has gone horribly wrong! What happened?
Age 11 to 14 Challenge Level:
Four rods, two of length a and two of length b, are linked to form a kite. The linkage is moveable so that the angles change. What is the maximum area of the kite?
On the Edge
Age 11 to 14 Challenge Level:
If you move the tiles around, can you make squares with different coloured edges?
Partly Painted Cube
Age 14 to 16 Challenge Level:
Jo made a cube from some smaller cubes, painted some of the faces of the large cube, and then took it apart again. 45 small cubes had no paint on them at all. How many small cubes did Jo use?
Rati-o
Age 11 to 14 Challenge Level:
Points P, Q, R and S each divide the sides AB, BC, CD and DA respectively in the ratio of 2 : 1. Join the points. What is the area of the parallelogram PQRS in relation to the original rectangle?
Triangles Within Triangles
Age 14 to 16 Challenge Level:
Can you find a rule which connects consecutive triangular numbers?
The Old Goats
Age 11 to 14 Challenge Level:
A rectangular field has two posts with a ring on top of each post. There are two quarrelsome goats and plenty of ropes which you can tie to their collars. How can you secure them so they can't. . . .
Concrete Wheel
Age 11 to 14 Challenge Level:
A huge wheel is rolling past your window. What do you see?
Three Cubes
Age 14 to 16 Challenge Level:
Can you work out the dimensions of the three cubes?
Eight Hidden Squares
Age 7 to 14 Challenge Level:
On the graph there are 28 marked points. These points all mark the vertices (corners) of eight hidden squares. Can you find the eight hidden squares?
A Tilted Square
Age 14 to 16 Challenge Level:
The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?
All in the Mind
Age 11 to 14 Challenge Level:
Imagine you are suspending a cube from one vertex and allowing it to hang freely. What shape does the surface of the water make around the cube?
Isosceles Triangles
Age 11 to 14 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
Zooming in on the Squares
Age 7 to 14
Start with a large square, join the midpoints of its sides, you'll see four right angled triangles. Remove these triangles, a second square is left. Repeat the operation. What happens?
Coloured Edges
Age 11 to 14 Challenge Level:
The whole set of tiles is used to make a square. This has a green and blue border. There are no green or blue tiles anywhere in the square except on this border. How many tiles are there in the set?
Convex Polygons
Age 11 to 14 Challenge Level:
Show that among the interior angles of a convex polygon there cannot be more than three acute angles.
Tessellating Hexagons
Age 11 to 14 Challenge Level:
Which hexagons tessellate?
Tilting Triangles
Age 14 to 16 Challenge Level:
A right-angled isosceles triangle is rotated about the centre point of a square. What can you say about the area of the part of the square covered by the triangle as it rotates?
Cutting a Cube
Age 11 to 14 Challenge Level:
A half-cube is cut into two pieces by a plane through the long diagonal and at right angles to it. Can you draw a net of these pieces? Are they identical?
Christmas Chocolates
Age 11 to 14 Challenge Level:
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
AMGM
Age 14 to 16 Challenge Level:
Can you use the diagram to prove the AM-GM inequality?
Corridors
Age 14 to 16 Challenge Level:
A 10x10x10 cube is made from 27 2x2 cubes with corridors between them. Find the shortest route from one corner to the opposite corner.
The Spider and the Fly
Age 14 to 16 Challenge Level:
A spider is sitting in the middle of one of the smallest walls in a room and a fly is resting beside the window. What is the shortest distance the spider would have to crawl to catch the fly?
Triangles Within Squares
Age 14 to 16 Challenge Level:
Can you find a rule which relates triangular numbers to square numbers?
Cubes Within Cubes Revisited
Age 11 to 14 Challenge Level:
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
Something in Common
Age 14 to 16 Challenge Level:
A square of area 3 square units cannot be drawn on a 2D grid so that each of its vertices have integer coordinates, but can it be drawn on a 3D grid? Investigate squares that can be drawn.
Age 11 to 14 Challenge Level:
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
Square It
Age 11 to 16 Challenge Level:
Players take it in turns to choose a dot on the grid. The winner is the first to have four dots that can be joined to form a square.
Dice, Routes and Pathways
Age 5 to 14
This article for teachers discusses examples of problems in which there is no obvious method but in which children can be encouraged to think deeply about the context and extend their ability to. . . .
All Tied Up
Age 14 to 16 Challenge Level:
A ribbon runs around a box so that it makes a complete loop with two parallel pieces of ribbon on the top. How long will the ribbon be?
Tourism
Age 11 to 14 Challenge Level:
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable.
Around and Back
Age 14 to 16 Challenge Level:
A cyclist and a runner start off simultaneously around a race track each going at a constant speed. The cyclist goes all the way around and then catches up with the runner. He then instantly turns. . . .
Painted Cube
Age 14 to 16 Challenge Level:
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
Weighty Problem
Age 11 to 14 Challenge Level:
The diagram shows a very heavy kitchen cabinet. It cannot be lifted but it can be pivoted around a corner. The task is to move it, without sliding, in a series of turns about the corners so that it. . . .
One and Three
Age 14 to 16 Challenge Level:
Two motorboats travelling up and down a lake at constant speeds leave opposite ends A and B at the same instant, passing each other, for the first time 600 metres from A, and on their return, 400. . . .
Tied Up
Age 14 to 16 Short Challenge Level:
How much of the field can the animals graze?
Trice
Age 11 to 14 Challenge Level:
ABCDEFGH is a 3 by 3 by 3 cube. Point P is 1/3 along AB (that is AP : PB = 1 : 2), point Q is 1/3 along GH and point R is 1/3 along ED. What is the area of the triangle PQR?
Framed
Age 11 to 14 Challenge Level:
Seven small rectangular pictures have one inch wide frames. The frames are removed and the pictures are fitted together like a jigsaw to make a rectangle of length 12 inches. Find the dimensions of. . . .
Picture Story
Age 14 to 16 Challenge Level:
Can you see how this picture illustrates the formula for the sum of the first six cube numbers?
Natural Sum
Age 14 to 16 Challenge Level:
The picture illustrates the sum 1 + 2 + 3 + 4 = (4 x 5)/2. Prove the general formula for the sum of the first n natural numbers and the formula for the sum of the cubes of the first n natural. . . .
Yih or Luk Tsut K'i or Three Men's Morris
Age 11 to 18 Challenge Level:
Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . .
Triangles Within Pentagons
Age 14 to 16 Challenge Level:
Show that all pentagonal numbers are one third of a triangular number.
Contact
Age 14 to 16 Challenge Level:
A circular plate rolls in contact with the sides of a rectangular tray. How much of its circumference comes into contact with the sides of the tray when it rolls around one circuit?
Star Gazing
Age 14 to 16 Challenge Level:
Find the ratio of the outer shaded area to the inner area for a six pointed star and an eight pointed star.
Tic Tac Toe
Age 11 to 14 Challenge Level:
In the game of Noughts and Crosses there are 8 distinct winning lines. How many distinct winning lines are there in a game played on a 3 by 3 by 3 board, with 27 cells?
Steel Cables
Age 14 to 16 Challenge Level:
Some students have been working out the number of strands needed for different sizes of cable. Can you make sense of their solutions?
Dissect
Age 11 to 14 Challenge Level:
What is the minimum number of squares a 13 by 13 square can be dissected into?
Drilling Many Cubes
Age 7 to 14 Challenge Level:
A useful visualising exercise which offers opportunities for discussion and generalising, and which could be used for thinking about the formulae needed for generating the results on a spreadsheet.
Marbles in a Box
Age 11 to 16 Challenge Level:
How many winning lines can you make in a three-dimensional version of noughts and crosses?
|
2020-04-01 02:56:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3645690083503723, "perplexity": 1177.189782359705}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505359.23/warc/CC-MAIN-20200401003422-20200401033422-00421.warc.gz"}
|
https://ohm.lumenlearning.com/multiembedq.php?id=611&theme=oea&iframe_resize_id=mom2
|
Enable text based alternatives for graph display and drawing entry
Try Another Version of This Question
Find the least common denominator of 11/60 and 8/45.
LCD =
|
2021-11-28 19:59:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18659180402755737, "perplexity": 2956.3914316510404}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358591.95/warc/CC-MAIN-20211128194436-20211128224436-00055.warc.gz"}
|
https://www.physicsforums.com/threads/find-theta-from-the-cross-product-and-dot-product-of-two-vectors.468976/
|
# Find theta from the cross product and dot product of two vectors
loganblacke
## Homework Statement
If the cross product of vector v cross vector w = 3i + j + 4k, and the dot product of vector v dot vector w = 4, and theta is the angle between vector v and vector w, find tan(theta) and theta.
## Homework Equations
vector c = |v||w| sin(theta) where vector c is the cross product of v and w.
## The Attempt at a Solution
I'm assuming you have to split the cross product back into the two original vectors and then calculate the angle but I'm not sure how to go from cross product to 2 vectors. Please help!
## Answers and Replies
Science Advisor
Homework Helper
You can't get the two vectors. And you don't have to.
|3i + j + 4k|=|v|*|w|*sin(theta). 4=|v|*|w|*cos(theta). How would you get tan(theta) from that?
loganblacke
You can't get the two vectors. And you don't have to.
|3i + j + 4k|=|v|*|w|*sin(theta). 4=|v|*|w|*cos(theta). How would you get tan(theta) from that?
I honestly have no idea.
Staff Emeritus
Science Advisor
Homework Helper
Education Advisor
Think trig identity.
Science Advisor
Homework Helper
Think trig identity.
That's coy. :) What's the definition of tan(theta)?
loganblacke
That's coy. :) What's the definition of tan(theta)?
tan theta is sin theta/cos theta.. which I think would put the vector over its magnitude and result in tan theta = unit vector..
Science Advisor
Homework Helper
tan theta is sin theta/cos theta.. which I think would put the vector over its magnitude and result in tan theta = unit vector..
??? Divide the two sides of the equations by each other. Can't you find a way to get tan(theta) on one side?
loganblacke
??? Divide the two sides of the equations by each other. Can't you find a way to get tan(theta) on one side?
I'm completely lost right now, the only thing i can work out on paper is if you isolate |v|*|w| in both equations by dividing both sides by cos theta and sin theta respectively. Then you could set the vector/sin theta = 4/cos theta.
Science Advisor
Homework Helper
I'm completely lost right now, the only thing i can work out on paper is if you isolate |v|*|w| in both equations by dividing both sides by cos theta and sin theta respectively. Then you could set the vector/sin theta = 4/cos theta.
There aren't any vectors here anymore, there's only |3i + j + 4k|. That's number, not a vector. You can compute it. Can't you get sin(theta)/cos(theta) on one side and a number on the the other?
Last edited:
loganblacke
There aren't any vectors here anymore. Everything is just numbers. Sure isolate |v|*|w| in both equations. Then set the other sides equal to each other. What's the resulting equation?
I see now that its the magnitude of vector 3i + J + 4k rather than the vector itself. So you end up with sqrt(3^2+1^2+4^2)/sin theta = 4/cos theta..
So you end up with tan theta = sqrt(26)/4.
loganblacke
then theta = arctan(sqrt(26)/4)
Thanks for the help.. again.
Staff Emeritus
Science Advisor
Homework Helper
Education Advisor
That's coy. :)
I am nothing if not coy.
|
2022-08-13 00:02:35
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822784423828125, "perplexity": 1556.2374825214463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571847.45/warc/CC-MAIN-20220812230927-20220813020927-00263.warc.gz"}
|
https://openstax.org/books/precalculus/pages/8-7-parametric-equations-graphs
|
Precalculus
# 8.7Parametric Equations: Graphs
Precalculus8.7 Parametric Equations: Graphs
### Learning Objectives
In this section you will:
• Graph plane curves described by parametric equations by plotting points.
• Graph parametric equations.
It is the bottom of the ninth inning, with two outs and two men on base. The home team is losing by two runs. The batter swings and hits the baseball at 140 feet per second and at an angle of approximately$45° 45°$to the horizontal. How far will the ball travel? Will it clear the fence for a game-winning home run? The outcome may depend partly on other factors (for example, the wind), but mathematicians can model the path of a projectile and predict approximately how far it will travel using parametric equations. In this section, we’ll discuss parametric equations and some common applications, such as projectile motion problems.
Figure 1 Parametric equations can model the path of a projectile. (credit: Paul Kreher, Flickr)
### Graphing Parametric Equations by Plotting Points
In lieu of a graphing calculator or a computer graphing program, plotting points to represent the graph of an equation is the standard method. As long as we are careful in calculating the values, point-plotting is highly dependable.
### How To
Given a pair of parametric equations, sketch a graph by plotting points.
1. Construct a table with three columns:$t,x(t),and y(t). t,x(t),and y(t).$
2. Evaluate $x x$ and $y y$ for values of $t t$ over the interval for which the functions are defined.
3. Plot the resulting pairs$( x,y ). ( x,y ).$
### Example 1
#### Sketching the Graph of a Pair of Parametric Equations by Plotting Points
Sketch the graph of the parametric equations $x(t)= t 2 +1, y(t)=2+t. x(t)= t 2 +1, y(t)=2+t.$
#### Analysis
As values for$t t$progress in a positive direction from 0 to 5, the plotted points trace out the top half of the parabola. As values of$t t$become negative, they trace out the lower half of the parabola. There are no restrictions on the domain. The arrows indicate direction according to increasing values of $t. t.$The graph does not represent a function, as it will fail the vertical line test. The graph is drawn in two parts: the positive values for $t, t,$ and the negative values for $t. t.$
Try It #1
Sketch the graph of the parametric equations$x= t , y=2t+3, 0≤t≤3. x= t , y=2t+3, 0≤t≤3.$
### Example 2
#### Sketching the Graph of Trigonometric Parametric Equations
Construct a table of values for the given parametric equations and sketch the graph:
$x=2cos t y=4sin t x=2cos t y=4sin t$
#### Analysis
We have seen that parametric equations can be graphed by plotting points. However, a graphing calculator will save some time and reveal nuances in a graph that may be too tedious to discover using only hand calculations.
Make sure to change the mode on the calculator to parametric (PAR). To confirm, the$Y= Y=$window should show
$X 1T = Y 1T = X 1T = Y 1T =$
instead of$Y 1 =. Y 1 =.$
Try It #2
Graph the parametric equations:$x=5cos t, y=3sin t. x=5cos t, y=3sin t.$
### Example 3
#### Graphing Parametric Equations and Rectangular Form Together
Graph the parametric equations$x=5cos t x=5cos t$and$y=2sin t. y=2sin t.$First, construct the graph using data points generated from the parametric form. Then graph the rectangular form of the equation. Compare the two graphs.
#### Analysis
In Figure 5, the data from the parametric equations and the rectangular equation are plotted together. The parametric equations are plotted in blue; the graph for the rectangular equation is drawn on top of the parametric in a dashed style colored red. Clearly, both forms produce the same graph.
Figure 5
### Example 4
#### Graphing Parametric Equations and Rectangular Equations on the Coordinate System
Graph the parametric equations$x=t+1 x=t+1$ and$y= t , t≥0, y= t , t≥0,$and the rectangular equivalent $y= x−1 y= x−1$on the same coordinate system.
#### Analysis
With the domain on$t t$restricted, we only plot positive values of$t. t.$The parametric data is graphed in blue and the graph of the rectangular equation is dashed in red. Once again, we see that the two forms overlap.
Try It #3
Sketch the graph of the parametric equations$x=2cos θ and y=4sin θ, x=2cos θ and y=4sin θ,$along with the rectangular equation on the same grid.
### Applications of Parametric Equations
Many of the advantages of parametric equations become obvious when applied to solving real-world problems. Although rectangular equations in x and y give an overall picture of an object's path, they do not reveal the position of an object at a specific time. Parametric equations, however, illustrate how the values of x and y change depending on t, as the location of a moving object at a particular time.
A common application of parametric equations is solving problems involving projectile motion. In this type of motion, an object is propelled forward in an upward direction forming an angle of $θ θ$ to the horizontal, with an initial speed of $v 0 , v 0 ,$and at a height $h h$ above the horizontal.
The path of an object propelled at an inclination of $θ θ$ to the horizontal, with initial speed $v 0 , v 0 ,$and at a height $h h$ above the horizontal, is given by
where$g g$accounts for the effects of gravity and $h h$ is the initial height of the object. Depending on the units involved in the problem, use$g=32 ft / s 2 g=32 ft / s 2$or$g=9.8 m / s 2 . g=9.8 m / s 2 .$The equation for$x x$gives horizontal distance, and the equation for$y y$ gives the vertical distance.
### How To
Given a projectile motion problem, use parametric equations to solve.
1. The horizontal distance is given by$x=( v 0 cos θ )t. x=( v 0 cos θ )t.$Substitute the initial speed of the object for$v 0 . v 0 .$
2. The expression$cos θ cos θ$indicates the angle at which the object is propelled. Substitute that angle in degrees for$cos θ. cos θ.$
3. The vertical distance is given by the formula$y=− 1 2 g t 2 +( v 0 sin θ )t+h. y=− 1 2 g t 2 +( v 0 sin θ )t+h.$The term$− 1 2 g t 2 − 1 2 g t 2$represents the effect of gravity. Depending on units involved, use$g=32 ft/s 2 g=32 ft/s 2$or$g=9.8 m/s 2 . g=9.8 m/s 2 .$Again, substitute the initial speed for$v 0 , v 0 ,$and the height at which the object was propelled for$h. h.$
4. Proceed by calculating each term to solve for$t. t.$
### Example 5
#### Finding the Parametric Equations to Describe the Motion of a Baseball
Solve the problem presented at the beginning of this section. Does the batter hit the game-winning home run? Assume that the ball is hit with an initial velocity of 140 feet per second at an angle of$45° 45°$to the horizontal, making contact 3 feet above the ground.
1. Find the parametric equations to model the path of the baseball.
2. Where is the ball after 2 seconds?
3. How long is the ball in the air?
4. Is it a home run?
### Media
Access the following online resource for additional instruction and practice with graphs of parametric equations.
### 8.7 Section Exercises
#### Verbal
1.
What are two methods used to graph parametric equations?
2.
What is one difference in point-plotting parametric equations compared to Cartesian equations?
3.
Why are some graphs drawn with arrows?
4.
Name a few common types of graphs of parametric equations.
5.
Why are parametric graphs important in understanding projectile motion?
#### Graphical
For the following exercises, graph each set of parametric equations by making a table of values. Include the orientation on the graph.
6.
${ x( t )=t y( t )= t 2 −1 { x( t )=t y( t )= t 2 −1$
$t t$ $x x$ $y y$ $−3 −3$ $−2 −2$ $−1 −1$ $0 0$ $1 1$ $2 2$ $3 3$
7.
${ x( t )=t−1 y( t )= t 2 { x( t )=t−1 y( t )= t 2$
$t t$ $−3 −3$ $−2 −2$ $−1 −1$ $0 0$ $1 1$ $2 2$ $x x$ $y y$
8.
${ x( t )=2+t y( t )=3−2t { x( t )=2+t y( t )=3−2t$
$t t$ $−2 −2$ $−1 −1$ $0 0$ $1 1$ $2 2$ $3 3$ $x x$ $y y$
9.
${ x( t )=−2−2t y( t )=3+t { x( t )=−2−2t y( t )=3+t$
$t t$ $−3 −3$ $−2 −2$ $−1 −1$ $0 0$ $1 1$ $x x$ $y y$
10.
${ x( t )= t 3 y( t )=t+2 { x( t )= t 3 y( t )=t+2$
$t t$ $−2 −2$ $−1 −1$ $0 0$ $1 1$ $2 2$ $x x$ $y y$
11.
${ x( t )= t 2 y( t )=t+3 { x( t )= t 2 y( t )=t+3$
$t t$ $−2 −2$ $−1 −1$ $0 0$ $1 1$ $2 2$ $x x$ $y y$
For the following exercises, sketch the curve and include the orientation.
12.
${ x(t)=t y(t)= t { x(t)=t y(t)= t$
13.
${ x(t)=− t y(t)=t { x(t)=− t y(t)=t$
14.
${ x(t)=5−| t | y(t)=t+2 { x(t)=5−| t | y(t)=t+2$
15.
${ x(t)=−t+2 y(t)=5−| t | { x(t)=−t+2 y(t)=5−| t |$
16.
${ x(t)=4sin t y(t)=2cos t { x(t)=4sin t y(t)=2cos t$
17.
${ x(t)=2sin t y(t)=4cos t { x(t)=2sin t y(t)=4cos t$
18.
${ x(t)=3 cos 2 t y(t)=−3sin t { x(t)=3 cos 2 t y(t)=−3sin t$
19.
${ x(t)=3 cos 2 t y(t)=−3 sin 2 t { x(t)=3 cos 2 t y(t)=−3 sin 2 t$
20.
${ x(t)=sec t y(t)=tan t { x(t)=sec t y(t)=tan t$
21.
${ x(t)=sec t y(t)= tan 2 t { x(t)=sec t y(t)= tan 2 t$
22.
${ x(t)= 1 e 2t y(t)= e − t { x(t)= 1 e 2t y(t)= e − t$
For the following exercises, graph the equation and include the orientation. Then, write the Cartesian equation.
23.
${ x( t )=t−1 y( t )=− t 2 { x( t )=t−1 y( t )=− t 2$
24.
${ x( t )= t 3 y( t )=t+3 { x( t )= t 3 y( t )=t+3$
25.
${ x(t)=2cos t y(t)=−sin t { x(t)=2cos t y(t)=−sin t$
26.
${ x(t)=7cos t y(t)=7sin t { x(t)=7cos t y(t)=7sin t$
27.
${ x(t)= e 2t y(t)=− e t { x(t)= e 2t y(t)=− e t$
For the following exercises, graph the equation and include the orientation.
28.
$x= t 2 , y = 3t, 0≤t≤5 x= t 2 , y = 3t, 0≤t≤5$
29.
$x=2t, y = t 2 , −5≤t≤5 x=2t, y = t 2 , −5≤t≤5$
30.
$x=t, y= 25− t 2 , 0
31.
$x(t)=−t,y(t)= t , t≥0 x(t)=−t,y(t)= t , t≥0$
32.
$x=−2cos t, y=6 sin t, 0≤t≤π x=−2cos t, y=6 sin t, 0≤t≤π$
33.
$x=−sec t, y=tan t, − π 2
For the following exercises, use the parametric equations for integers a and b:
$x(t)=acos((a+b)t) y(t)=acos((a−b)t) x(t)=acos((a+b)t) y(t)=acos((a−b)t)$
34.
Graph on the domain$[ −π,0 ], [ −π,0 ],$where$a=2 a=2$and$b=1, b=1,$and include the orientation.
35.
Graph on the domain$[ −π,0 ], [ −π,0 ],$where$a=3 a=3$and$b=2 b=2$, and include the orientation.
36.
Graph on the domain$[ −π,0 ], [ −π,0 ],$where$a=4 a=4$and$b=3 b=3$, and include the orientation.
37.
Graph on the domain$[ −π,0 ], [ −π,0 ],$where$a=5 a=5$and$b=4 b=4$, and include the orientation.
38.
If$a a$is 1 more than$b, b,$describe the effect the values of$a a$and$b b$have on the graph of the parametric equations.
39.
Describe the graph if$a=100 a=100$and$b=99. b=99.$
40.
What happens if$b b$is 1 more than$a? a?$Describe the graph.
41.
If the parametric equations$x(t)= t 2 x(t)= t 2$and$y( t )=6−3t y( t )=6−3t$have the graph of a horizontal parabola opening to the right, what would change the direction of the curve?
For the following exercises, describe the graph of the set of parametric equations.
42.
$x(t)=− t 2 x(t)=− t 2$and$y( t ) y( t )$is linear
43.
$y(t)= t 2 y(t)= t 2$and$x( t ) x( t )$is linear
44.
$y(t)=− t 2 y(t)=− t 2$and$x( t ) x( t )$is linear
45.
Write the parametric equations of a circle with center$( 0,0 ), ( 0,0 ),$radius 5, and a counterclockwise orientation.
46.
Write the parametric equations of an ellipse with center$( 0,0 ), ( 0,0 ),$major axis of length 10, minor axis of length 6, and a counterclockwise orientation.
For the following exercises, use a graphing utility to graph on the window$[ −3,3 ] [ −3,3 ]$by$[ −3,3 ] [ −3,3 ]$on the domain$[0,2π) [0,2π)$for the following values of$a a$and$b b$, and include the orientation.
${ x(t)=sin(at) y(t)=sin(bt) { x(t)=sin(at) y(t)=sin(bt)$
47.
$a=1,b=2 a=1,b=2$
48.
$a=2,b=1 a=2,b=1$
49.
$a=3,b=3 a=3,b=3$
50.
$a=5,b=5 a=5,b=5$
51.
$a=2,b=5 a=2,b=5$
52.
$a=5,b=2 a=5,b=2$
#### Technology
For the following exercises, look at the graphs that were created by parametric equations of the form${ x(t)=acos(bt) y(t)=csin(dt) . { x(t)=acos(bt) y(t)=csin(dt) .$Use the parametric mode on the graphing calculator to find the values of $a,b,c, a,b,c,$ and $d d$ to achieve each graph.
53.
54.
55.
56.
For the following exercises, use a graphing utility to graph the given parametric equations.
1. ${ x(t)=cost−1 y(t)=sint+t { x(t)=cost−1 y(t)=sint+t$
2. ${ x(t)=cost+t y(t)=sint−1 { x(t)=cost+t y(t)=sint−1$
3. ${ x( t )=t−sint y( t )=cost−1 { x( t )=t−sint y( t )=cost−1$
57.
Graph all three sets of parametric equations on the domain$[0, 2π]. [0, 2π].$
58.
Graph all three sets of parametric equations on the domain$[ 0,4π ]. [ 0,4π ].$
59.
Graph all three sets of parametric equations on the domain$[ −4π,6π ]. [ −4π,6π ].$
60.
The graph of each set of parametric equations appears to “creep” along one of the axes. What controls which axis the graph creeps along?
61.
Explain the effect on the graph of the parametric equation when we switched$sin t sin t$and$cos t cos t$.
62.
Explain the effect on the graph of the parametric equation when we changed the domain.
#### Extensions
63.
An object is thrown in the air with vertical velocity of 20 ft/s and horizontal velocity of 15 ft/s. The object’s height can be described by the equation$y( t )=−16 t 2 +20t y( t )=−16 t 2 +20t$, while the object moves horizontally with constant velocity 15 ft/s. Write parametric equations for the object’s position, and then eliminate time to write height as a function of horizontal position.
64.
A skateboarder riding on a level surface at a constant speed of 9 ft/s throws a ball in the air, the height of which can be described by the equation$y( t )=−16 t 2 +10t+5. y( t )=−16 t 2 +10t+5.$Write parametric equations for the ball’s position, and then eliminate time to write height as a function of horizontal position.
For the following exercises, use this scenario: A dart is thrown upward with an initial velocity of 65 ft/s at an angle of elevation of 52°. Consider the position of the dart at any time$t. t.$Neglect air resistance.
65.
Find parametric equations that model the problem situation.
66.
Find all possible values of$x x$that represent the situation.
67.
When will the dart hit the ground?
68.
Find the maximum height of the dart.
69.
At what time will the dart reach maximum height?
For the following exercises, look at the graphs of each of the four parametric equations. Although they look unusual and beautiful, they are so common that they have names, as indicated in each exercise. Use a graphing utility to graph each on the indicated domain.
70.
An epicycloid:${ x(t)=14cos t−cos(14t) y(t)=14sin t+sin(14t) { x(t)=14cos t−cos(14t) y(t)=14sin t+sin(14t)$on the domain$[0,2π] [0,2π]$.
71.
A hypocycloid:${ x(t)=6sin t+2sin(6t) y(t)=6cos t−2cos(6t) { x(t)=6sin t+2sin(6t) y(t)=6cos t−2cos(6t)$on the domain$[0,2π] [0,2π]$.
72.
A hypotrochoid:${ x(t)=2sin t+5cos(6t) y(t)=5cos t−2sin(6t) { x(t)=2sin t+5cos(6t) y(t)=5cos t−2sin(6t)$on the domain$[0,2π] [0,2π]$.
73.
A rose:${ x(t)=5sin(2t)sint y(t)=5sin(2t)cost { x(t)=5sin(2t)sint y(t)=5sin(2t)cost$on the domain$[0,2π] [0,2π]$.
|
2020-07-13 02:13:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 341, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6864575743675232, "perplexity": 784.8682755342799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657140746.69/warc/CC-MAIN-20200713002400-20200713032400-00196.warc.gz"}
|
http://aekx.vitaform-nature.fr/monte-carlo-simulation-gbm-in-r.html
|
### Monte Carlo Simulation Gbm In R
Simulation and the Monte Carlo Method, Third Edition is an excellent text for upper-undergraduate and beginning graduate courses in stochastic simulation and Monte Carlo techniques. Use R software to program probabilistic simulations, often called Monte Carlo simulations. 067 and scale parameter 0. 0005 to generate 10,000 realizations of Sr and compute the value of the discounted payoff v)=e="T (K - S)+ Estimate the mean and variance of V. Microsoft Excel is the dominant spreadsheet analysis tool and Palisade's @RISK is the leading Monte Carlo simulation add-in for Excel. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. In Excel, you would need VBA or another plugin to run multiple iterations. Monte Carlo Methods This is a project done as a part of the course Simulation Methods. I have the correlation matrix, the covariance matrix. Uniformly scatter some points over a unit square [0,1]×[0,1], as in Figure ??. Monte Carlo Simulation Excel Add-Ins 2015. Option Pricing Using Monte Carlo Methods A Directed Research Project Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE in partial fulfillment of the requirements for the Professional Degree of Master of Science in Financial Mathematics by Junxiong Wang May 2011 Approved: Professor Marcel Blais, Advisor. Monte Carlo Simulation The Monte Carlo simulation approach is a discrete numerical approximation to the true analytic solution, in this case where the underlying prices follow GBM (see Hull for a review). Consider a geometric Brownian motion (GBM) process in which you want to incorporate alternative asset price dynamics. If the GBM stays inside the corridor [L, U] between predefined times it should return 1 otherwise 0. Monte carlo simulation. The average pay-off is then calculated by summing work from home twerk remix together the returned vector and monte carlo simulation option pricing in r dividing by the number of iterations. 4 CLT and Simple Sample Averages 20 Exercises 24 2 Monte Carlo Assessment of Moments 27 2. 50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin. Volatility and your Time Horizon. At the end, we searched for variables that gave a result of something greater than something, or less than something. The exact test finishes almost instantly because the table is small, both in terms of sample size (N=31) and in terms of dimensions (3 x 3). This is a great question on a subtle point. I wrote the follo. They range between -1 and +1, with 0 indicating the lack of a linear association. A detailed Monte Carlo modeling of nonlinear chromatography was developed by Dondi et al. Numerical demonstration based on same Geometric Brownian Motion. Several approaches have been proposed in the literature to price path-dependent derivatives of European or. GNU MCSim is a simulation package, written in C, which allows you to: design and run your own statistical or simulation models (using algebraic or differential equations), perform Monte Carlo stochastic simulations, do Bayesian inference through Markov Chain Monte Carlo simulations, formally optimize experimental designs. Simulation is used when the process consists of multiple steps. I have defined return as DRIFT + correlated ZValue * Stdev. Today we are going to talk about a more advanced topic in model evaluation. R/monte_carlo. Last week, I delved into important technical details and showed how to make self-contained MCHTest objects that don't suffer side effects from changes in the global namespace. Monte Carlo Methods and Path-Generation techniques for Pricing Multi-asset Path-dependent Options Piergiacomo Sabino Dipartimento di Matematica Universit`a degli Studi di Bari [email protected] Figure 11: Actual versus Simulated Gold spot price return series histograms using the MC simulation with historical returns approach. If the GBM stays inside the corridor [L, U] between predefined times it should return 1 otherwise 0. Monte Carlo simulation (also known as the Monte Carlo Method) allows for better decision making under uncertainty. We will consider the following problem where ψ is some function on E ∈ R n over R and X = (X 1, …,X n) is a n-dimensional vector of random variables with. Assessing Models using Monte Carlo Simulations Sangmin Oh Jessica Wachtery November 12, 2017 Abstract We establish a framework for assessing the validity of a model using Monte Carlo simulations and inferences based on sampling distributions. The periodic return (note the return is expressed in continuous compounding) is a function of. From here, you can use this for all sorts of things. Probability theory for-malizes the association of an event to its volume, or measure relative to the universe of possible outcomes, by defining the probability of this event to be the corresponding volume. Since the price is a random variable, one possible way of finding its expected value is by simulation. B-RISK is a Monte Carlo simulation software for simulating building fires. 1) Introducing Monte Carlo methods with R, Springer 2004, Christian P. The Zvalue is arrived at by multiplying NORMSINV(Rand()) values by the Cholesky decomposition matrix. Pacheco1, Marley M. In the following section we provide Monte Carlo algorithm to estimate the value V of the option for the Black-Scholes model. These lecture notes come with many examples written in the statistical programming language R. Geometric Brownian motion (GBM) is a stochastic process. This example will help build a conceptual understanding before looking at another example. is to provide a comprehensive introduction to Monte Carlo methods, with a mix of theory, algorithms (pseudo + actual), and applications. Encapsulating our simulation methodology into a common library has allowed us to minimise any additional coding and create highly optimised implementations. In Section 2, we give an overview. Following the answers in this post, I'm trying to implement something similar. Yet, it is not widely used by the Project Managers. Package ‘LSMonteCarlo’ February 19, 2015 Type Package Title American options pricing with Least Squares Monte Carlo method Version 1. For example, suppose you invest in two di erent stocks, S 1(t) and S 2(t), buying N 1 shares of the rst and N 2 of the second. 2028-2 1 REPORT ITU-R SM. For example, when we define a Bernoulli distribution for a coin flip and simulate flipping a coin by sampling from this distribution, we are performing a Monte Carlo simulation. For simulation of the paths, the simplest case is that the distribution of X(h) is known for any hin a form that allows for simulation; then one can just simulate discrete skeletons as for Brownian motion. Simplifies Monte Carlo simulation studies by automatically setting up loops to run over parameter grids and parallelising the Monte Carlo repetitions. For merton that is poisson jumps with jump size being lognormal it is fairly easy with ?rpoisson and ?rnorm. What is it, how can I get started using it? Well just follow after the jump to find out. We will use Monte Carlo simulation to understand the properties of different statistics computed from sample data. Monte Carlo Simulation of Sample Percentage with 10000 Repetitions In this book, we use Microsoft Excel to simulate chance processes. Vijay Vaidyanathan, PhD. We’ll calculate the maximum drawdown for each sequence and store it in an array called dd,. The results of these numerous scenarios can give you a "most likely" case, along with a statistical distribution to understand the risk or uncertainty involved. Hogg, Joseph W. The book also serves as a valuable reference for professionals who would like to achieve a more formal understanding of the Monte Carlo method. Modeling variations of an asset, such as an index, bond or stock, allows an investor to simulate its price and that of the instruments that are derived from it; for example, derivatives. io Find an R #' #' The Geometric Brownian Motion process to describe small movements in prices #' is given by #' Ending Prices of Monte Carlo Simulation #' #' Get the ending prices, i. This may seem like a strange way to implement Monte Carlo simulation, but think about what is going on behind the scenes every time the Worksheet recalculates: (1) 5000 sets of random inputs are generated (2) The model is evaluated for all 5000 sets. Microsoft Excel is the dominant spreadsheet analysis tool and Palisade’s @RISK is the leading Monte Carlo simulation add-in for Excel. I have always been curious about how to use the correlation coefficient in the compuations of a Monte Carlo simulation. Transcript. In my code I just called R or Python’s built in random functions, but sampling can become much more sophisticated. Monte Carlo Method (Part III – Option pricing) Now that we are familiar with both the Monte Carlo Simulation and option concept, we can move on to determining a way to apply Monte Carlo in option pricing. Monte Carlo Simulation with TensorFlow. 2005) On European and Asian option pricing in. Assessing Models using Monte Carlo Simulations Sangmin Oh Jessica Wachtery November 12, 2017 Abstract We establish a framework for assessing the validity of a model using Monte Carlo simulations and inferences based on sampling distributions. This will generate a probability distribution for the output after the simulation is ran. OK, so I'm going to try my hand at a tutorial, we're going to use R to run a Monte Carlo simulation on the expected goal rates of the shots in the Southampton V Liverpool game (23/02/2015), and calculate the win probability of an average team given those chances based on those. Following the answers in this post, I'm trying to implement something similar. Fast Monte Carlo Simulation for Pricing Covariance Swap under Correlated Stochastic Volatility Models Junmei Ma, Ping He Abstract—The modeling and pricing of covariance swap derivatives under correlated stochastic volatility models are studied. A tutorial for Generating Correlated Asset Paths in MATLAB is also available. The e-book database EBC; Audiovisual media; Research data. Secondly, Monte- Carlo simulation seems unable to factor the behavioral irrationality of market participants. Confused? Try the simple retirement calculator. Since we know how many times, let's use a for loop. If there is only one variable and this is the short-term risk-free interest rate, r, or some variable related to r, the Monte Carlo simulation procedure is similar to that just described except that the discount rate is different for each run. In addition to that, there is a brief discussion of the more advanced features of the package. Monte Carlo simulations are used to model the probability of different outcomes in a process that cannot easily be predicted due to the intervention of random variables. Last week, I delved into important technical details and showed how to make self-contained MCHTest objects that don't suffer side effects from changes in the global namespace. And simillarly for schouten's nig there is dnig. Let’s perform a Monte Carlo simulation of 1000 iterations for sequences of 100 transactions. Learn More about MDRC. It also generates LaTeX tables. for American Options via Least-Squares Monte Carlo. Introduction I introduced MCHT two weeks ago and presented it as a package for Monte Carlo and boostrap hypothesis testing. net Black Scholes FX Option Pricer using Monte Carlo Simulation in Excel VBA Function Black_Scholes (S as double, K as double, r as double, rf as double, t. This Monte Carlo simulation tool provides a means to test long term expected portfolio growth and portfolio survival based on withdrawals, e. For many simulation problems QMC sampling can achieve a rate of convergence close to O(1=n), clearly higher than the O(1= p n) rate of MC. For merton that is poisson jumps with jump size being lognormal it is fairly easy with ?rpoisson and ?rnorm. 3 LLN and Classic Simple Regression 15 1. I have the correlation matrix, the covariance matrix. Secondly, Monte- Carlo simulation seems unable to factor the behavioral irrationality of market participants. The MonteCarlo package allows to create simulation studies and to summarize their results in LaTeX tables quickly and easily. Before we begin, we want to mention that a model is at least as important as the simulation results. Or copy & paste this link into an email or IM:. Monte Carlo techniques are often the only. For instance, a list of random numbers generated independently from a normal distribution with mean 0 can simulate a white noise process. Using this frame-work, we nd that geometric brownian motion underestimates the skewness in. What is Monte Carlo simulation?. Using Control Variates in MATLAB. Monte Carlo Simulation. Monte Carlo Modeling Monte Carlo methods are based on the analogy between probability and volume. Monte Carlo simulation (also called the Monte Carlo Method or Monte Carlo sampling) is a way to account for risk in decision making and quantitative analysis. R Example 5. (The method does rely on a more limited simulation, however - of test statistics rather than data). The Zvalue is arrived at by multiplying NORMSINV(Rand()) values by the Cholesky decomposition matrix. It is named MonteCarlo and aims to make simulation studies as easy as possible - including parallelization and the generation of tables. The numbers are then added together to show a very basic monte carlo simulation. Options trade started in 1973 at the Chicago Board Options Exchange (Hull, Fundamentals of futures and options markets 2008). The Monte Carlo Method is a very general method for determining distributional properties of statistics and for obtaining confidence intervals (CIs). Consider a portfolio of five assets with the following expected returns, standard deviations, and correlation matrix based on daily asset returns (where ExpReturn and Sigmas are divided by 100 to convert percentages to returns). Hogg, Joseph W. How to perform Monte Carlo simulation for trading system: Firstly, from Settings tab, you need to set up position data source, value of positions per trial, starting capital, minimum capital, position sizing method, etc. Apart from giving general information this text also constitutes a specification for the first. Simulation is used when the process consists of multiple steps. There is a video at the end of this post which provides the Monte Carlo simulations. The BM and BSM are used to value of the derivatives using risk neutral approach, but many researches do not assume risk neutral. Monte Carlo's can be used to simulate games at a casino (Pic courtesy of Pawel Biernacki) This is the first of a three part series on learning to do Monte Carlo simulations with Python. Shiny application with a Monte Carlo simulation of a geometrical brownian motion - MarcoLeti/GBM_MonteCarlo_ShinyApp. It is used to value projects that require significant amounts of funds and which may have future financial implications on a company. Monte Carlo method has received significant consideration from the context of quantitative finance mainly due to its ease of implementation for complex problems in the field. The Monte-Carlo simulation engine will price a portfolio with one option trade. Monte Carlo Simulation กับ GBM - 2020 - Talkin go money วิธีหาค่า pi ด้วย Monte Carlo Simulation (กุมภาพันธ์ 2020). org are unblocked. Compre o livro Finance with Monte Carlo na Amazon. Free Sample,Example & Format Monte Carlo Simulation Excel Template ufehw. The periodic return (note the return is expressed in continuous compounding) is a function of. Monte Carlo simulation for Celtics winning a game: Create a Monte Carlo simulation to confirm your answer to the previous problem by estimating how frequently the Celtics win at least 1 of 4 games. Using Monte Carlo simulation, find the approximate area under the curve y=cos(x) over the interval. Geometric Brownian Motion. Monte Carlo is just a method with random simulation. The techniques demonstrated are native to Excel, no add-ins are used. Although the Monte Carlo simulation yields good results fairly easily, a. Interpretation of Monte-Carlo Simulation Results We provide two result sheets such as ‘Result Sheet’ and ‘Summary Sheet’. 130 Excel Simulations in Action: Simulations to Model Risk, Gambling, Statistics, Monte Carlo Analysis, Science, Business and Finance by Dr. 1 Simulating Brownian motion (BM) and geometric Brownian motion (GBM) For an introduction to how one can construct BM, see the Appendix at the end of Monte Carlo simulation is thus commonly used to do estimate the prices. Importance Sampling and Monte Carlo Simulations 4 2 0 2 4 6 0. Miquel (LBNL), and September 2005 by G. Simulation from the bivariate normal. This type of calculator is known as a Monte Carlo simulation, or MCS: that means it calculates many possible outcomes, to show you both your expected return and the risk that you'll do worse than that. The simplest approach is to write your own scripts that carry out the steps you need for your simulations. Numerical demonstration based on same Geometric Brownian Motion. Supported by a series of Monte Carlo (MC) simulations, Su et al. Monte carlo simulation. What Is a Monte Carlo Simulation? • Specify a population (i. An R community blog edited by RStudio. Figure 4 visualizes the process of Monte Carlo simulation, where the straight line represents the drift of the geometric Brownian motion, and the other tracks represent the simulation trails of derivative security price movement. I Two simulated GBM's (using Monte Carlowill come back to this) Stochastic processes in finance. t S Figure 4 Process of Monte Carlo simulation We can see that the Monte Carlo simulation method is very. R Programming for Simulation and Monte Carlo Methods focuses on using R software to program probabilistic simulations, often called Monte Carlo Simulations. io Find an R package R language docs Run R in your browser R Notebooks. I would like to know if there is a more efficiency way to speed up below code. Simulating Multiple Asset Paths in MATLAB. 1 Simulating Brownian motion (BM) and geometric Brownian motion (GBM) For an introduction to how one can construct BM, see the Appendix at the end of Monte Carlo simulation is thus commonly used to do estimate the prices. (a) Use the Euler method with the time step size At = 0. 19,28 concluded that a 4-μm wavelength OCT system would be able to image through the milled alumina plate to reveal the backside. In presenting the multilevel Monte Carlo method, I hope to emphasise: the simplicity of the idea its exibility that it’s not prescriptive, more an approach scope for improved performance through being creative lots of people working on a variety of applications I will focus on ideas rather than lots of numerical results. 2028-2 1 REPORT ITU-R SM. R Example 5. Monte Carlo Methods This is a project done as a part of the course Simulation Methods. I have tried to create an excel to compute VaR using Monte Carlo Simulation (Geometric Brownian Motion). I simplify much of the work created leaders in the field like Christian Robert and George Casella into easy to digest lectures with examples. Application of Monte Carlo methods in finance Fred Espen Benth Centre of Mathematics for Applications (CMA) Pricing using Monte Carlo I Simulation of expectations I Quasi-MC as variance reduction. The following is I used truncated Euler method to do CIR model simulation, which is very crude, but enough to show MC you want. You simply pass it the number of simulations you want to run, and a list describing each parameter, and it will return the Monte Carlo sample as a data frame. This Monte Carlo simulation tool provides a means to test long term expected portfolio growth and portfolio survival based on withdrawals, e. A tutorial for Generating Correlated Asset Paths in MATLAB is also available. How To Add Monte Carlo Simulation to Your Spreadsheet Models. Monte Carlo uses this association. Enter Monto Carlo Simulation. Use existing R functions and understand how to write their own R functions to perform simulated inference estimates, including likelihoods and. The Monte Carlo Simulation Technique. An Introduction to the Uses of Monte Carlo Methods in Finance Monte Carlo: Solution by Simulation The goal of this presentation is to show you when to use Monte Carlo and to provide a couple of interactive examples with visualizations. It is a technique used to. Tabular Potentials for Monte Carlo Simulation of Supertoroids with Short-Range Interactions Harold W. Download MonteCarlo-noexe. To construct these scripts you will need to understand what you are simulating, that is what is the distribution of outcomes, and what are you measuring about those outcomes. I have the correlation matrix, the covariance matrix. Now publishers - monte carlo simulation for Monte Carlo Simulation for Econometricians Foundations and Trends in Econometrics > Monte Carlo. Essentially all we need in order to carry out this simulation is the daily volatility for the asset and the daily drift. geometric Brownian motion. Accurate and simple pricing of basket options of European and American style can be a daunting task. We can play a single game of craps. Once the Monte Carlo Analysis is completed, there would be no single project completion date. Interpretation of Monte-Carlo Simulation Results We provide two result sheets such as ‘Result Sheet’ and ‘Summary Sheet’. 8 out of 5 stars 4. Uncertainty in Forecasting Models. The ESTDATA= option reads in the XCH_EST data set which contains the parameter estimates and covariance matrix. A good Monte Carlo simulation starts with a solid understanding of how the underlying process works. If the GBM stays inside the corridor [L, U] between predefined times it should return 1 otherwise 0. If you can do it on Python, so certainly you can do it on Quantopian. The Monte Carlo modeling of chromatography is a computer adaptation of a composite Poissonian. A Monte Carlo simulation is an attempt to predict the future many times over. Monte Carlo simulations are very fun to write and can be incredibly useful for solving ticky math problems. The code re-implements 2d Monte Carlo simulations originally developed in Fieremans, et al. processes involving human choice or processes for which we have incomplete information). An Introduction to the Uses of Monte Carlo Methods in Finance Monte Carlo: Solution by Simulation The goal of this presentation is to show you when to use Monte Carlo and to provide a couple of interactive examples with visualizations. Microsoft Excel makes it pretty easy for you to build a stock market Monte Carlo simulation spreadsheet. If the GBM stays inside the corridor [L, U] between predefined times it should return 1 otherwise 0. Kroese, Thomas Taimre, Zdravko I. A Monte Carlo simulation is very common used in many statistical and econometric studies by many researchers. Today we are going to talk about a more advanced topic in model evaluation. Jones, and Xiao-Li Meng. 94, I find that. Es wird dabei versucht, analytisch nicht oder nur aufwendig lösbare Probleme mit Hilfe der Wahrscheinlichkeitstheorie numerisch zu lösen. The Monte-Carlo simulation engine will price a portfolio with one option trade. A good Monte Carlo simulation starts with a solid understanding of how the underlying process works. This part contains a general presentation to the Monte Carlo and Quasi-Monte Carlo simulation methods. Monte Carlo simulations are used to model the probability of different outcomes in a process that cannot easily be predicted due to the intervention of random variables. Monte Carlo simulation (also called the Monte Carlo Method or Monte Carlo sampling) is a way to account for risk in decision making and quantitative analysis. Asian call option A variation on a European call option (that is cheaper) is to average the price of the stock over. Having a clean and grounds-up code is always beneficial as this helps tweak and reformulate the basics. They are routinely used to …. Doing Monte Carlo simulations in Minitab Statistical Software is very easy. predictNLS (Part 1, Monte Carlo simulation): confidence intervals for ‘nls’ models Those that do a lot of nonlinear fitting with the nls function may have noticed that predict. You can get […]. 2) discuss where the randomness comes from. The stock has to go above or below these strike prices but we also have to cover our option costs (green line). Monte Carlo simulation can be applied to solve a real options problem, that is, to obtain an option result. org) Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. tion, surveys database Monte Carlo and adaptive Monte Carlo. This tutorial presents MATLAB code that prices an Asian option using Monte-Carlo simulation in conjunction with the control variate variance reduction technique. These two types of methods are used to evaluate an integral as an expected value. Beketov Description The package compiles functions for calculating prices of American put op-. This process is an. I dont understand why we would need to perform monte carlo simulation to find out that in 95% of scenarios the price is larger than x. No, sorry, this spreadsheet won’t let you run a hedge fund. I have always been curious about how to use the correlation coefficient in the compuations of a Monte Carlo simulation. Standard market practice is to measure such sensitivities using a "bump and revalue" method. for replication of the Monte Carlo simulation the sample script always yields the same results. Monte Carlo simulations are used in a diverse range of applications, such as the assessment of traffic flow on highways, the development of models for the evolution of stars, and attempts to predict risk factors in the stock market. Free Sample,Example & Format Monte Carlo Simulation Excel Template ufehw. In the following there is my code for pricing an European plain vanilla call option on non dividend paying stock, under the assumption that the stock follows a GBM. How is it done? The “Monte Carlo” aspect of this overall process simply refers to what is, in. Tim ST Leung E4703 Monte Carlo Simulation Method 23 29 The LR PW Estimator of from IEOR 4703 at Columbia University. it Report 36/07 Abstract We consider the problem of pricing path-dependent options on a basket of underlying assets using simulations. In previous posts, we covered how to run a Monte Carlo simulation and how to visualize the results. My first R package has been released on CRAN recently. Monte Carlo simulation allows the analysis of complex systems that deal with uncertainty. INTRODUCTION Any construction project is expected to be completed within certain period of time. Information about the open-access article 'Monte Carlo Simulation Studies in Item Response Theory with the R Programming Language' in DOAJ. The BM and BSM are used to value of the derivatives using risk neutral approach, but many researches do not assume risk neutral. Recall that a loop is great for repeating something. Monte Carlo Simulation The Monte Carlo simulation approach is a discrete numerical approximation to the true analytic solution, in this case where the underlying prices follow GBM (see Hull for a review). I simplify much of the work created leaders in the field like Christian Robert and George Casella into easy to digest lectures with examples. Register Activate. A disadvantage of this. 2 Monte Carlo integratio 4n 3 Generation and samplin 8g methods 4 Varianc 1e reduction 3 5 Quasi-rando 2m numbers 3 6 Quasi-Monte 3 Carlo techniques 3 7 Monte Carlo fo methodr rarefieds gas dynamic 4s 2 References 46 1. Figure 4 visualizes the process of Monte Carlo simulation, where the straight line represents the drift of the geometric Brownian motion, and the other tracks represent the simulation trails of derivative security price movement. , ferromagnetism. One of the most important and challenging aspects of forecasting is the uncertainty inherent in examining the future, for which Monte Carlo simulations can be an effective solution. In the following there is my code for pricing an European plain vanilla call option on non dividend paying stock, under the assumption that the stock follows a GBM. This paper uses an improved sampling procedure for calculating the probability of failure, called separable Monte–Carlo method. Monte Carlo simulation. TOPAS is a Geant4-based Monte Carlo tool for proton therapy. org podcast, let's get in to Monte Carlo simulations. After the World War II, during the 1940s, the method was continually in use and became a. And simillarly for schouten's nig there is dnig. The variance, or error, in a Monte Carlo simulation is O(1/√N), so to increase the precision by a factor of 2, four times the number of paths must be used. For more accurate p values, on some datasets, it is good to increase the Monte Carlo simulation number, the Monte Carlo simulation gives us the null distribution. Handbook of Monte Carlo Methods. Simulations of stocks and options are often modeled using stochastic differential equations (SDEs). Using financial planning software and retirement calculators, you can leverage these powerful forecasting models in your retirement planning if you understand how to use them and interpret their results. When F and G are linear functions of the state variable (as they are in this case), the SDE is called a Geometric Brownian Motion. The BM and BSM are used to value of the derivatives using risk neutral approach, but many researches do not assume risk neutral. 4 CLT and Simple Sample Averages 20 Exercises 24 2 Monte Carlo Assessment of Moments 27 2. Jones, and Xiao-Li Meng. I am trying to implement a vanilla European option pricer with Monte Carlo using R. Due to their computationally intense nature and the need to run multiple sets of simulations with the same parameters to average the results, high throughput computing was essential to performing parameter assessment with noncontinuum simulation codes. NumPy) Monte Carlo simulation is used for option pricing and risk management problems. I am trying to simulate Geometric Brownian Motion in Python, to price a European Call Option through Monte-Carlo simulation. Using this frame-work, we nd that geometric brownian motion underestimates the skewness in. As already suggested in the introduction, Monte Carlo methods' popularity and development have very much to do with the advent of computing technology in the 1940s to which von Neumann (picture above) was a pioneer. At the end of the simulation, thousands or millions of "random trials" produce a distribution of outcomes that can be. First, I'm going to use base R's random sampling functions for the Poisson and the Negative Binomial to generate samples given the presumed parameters. The methodology is much easier and much faster to implement than Monte Carlo simulation, but we relied on numerous full Monte Carlo simulations, which we ran on Domino's platform in R, to validate our methodology. What are Monte Carlo methods?. Named after famous casino in Monaco. Stochastic Simulation APPM 7400 Lesson 7: More Monte Carlo Integration and Variance Reduction Techniques September 19, 2018 Lesson 7: More Monte Carlo Integration and Variance Reduction TechniquesStochastic Simulation September 19, 2018 1/24. Asian call option A variation on a European call option (that is cheaper) is to average the price of the stock over. Many software tools are available to assist in helping build Monte Carlo simulations, such as the TIRM pilot software tool presented in Chapter 12. In this lab, we'll learn how to simulate data with R using random number generators of different kinds of mixture variables we control. First, the integration is between 0 and infinity. Using numpy and pandas to build a model and generate multiple potential. If the GBM stays inside [80,120] between the times [1,2] and [2,3], value should be 1 otherwise 0. For a general discussion of Monte-Carlo simulation see the Monte-Carlo Methods tutorial. 0 out of 5 stars 2. Rubinstein. More Monte Carlo available on the site. You can also search for vars that give you a range. frame classes The Last One A list is a collection of arbitrary objects known as its components > li=list(num=1:5,y="color",a=T) create a list with three arguments The last class we briefly mention is the data frame. expected value). Today, we will wrap that work into a Shiny app wherein a user can build a custom portfolio, and then choose a number of simulations to run and a number of months to simulate into the future. The physicists involved in this work were big fans of gambling, so they gave the simulations the code name Monte Carlo. Monte Carlo in this simulation is actually used in quite a few places. You simply pass it the number of simulations you want to run, and a list describing each parameter, and it will return the Monte Carlo sample as a data frame. 06, sigma = 0. , testing whether the portfolio can sustain the planned withdrawals required for retirement or by an endowment fund. Definition: Monte Carlo Simulation is a mathematical technique that generates random variables for modelling risk or uncertainty of a certain system. B-RISK is a Monte Carlo simulation software for simulating building fires. Schlijper, A. Approximating the above expectation using a sample mean is referred to as Monte Carlo integration or Monte Carlo simulation. I am relatively new to Python, and I am receiving an answer that I believe to be wrong, as it is nowhere near to converging to the BS price, and the iterations seem to be negatively trending for some reason. Simulation from the bivariate normal. What are the configurations of $(\alpha,\beta)$ for which (after optimizing the effort within levels) $${\cal C}_{ost} \sim_c \varepsilon^{-2},$$ i. Compilation and visualization of mean and standard deviation of portfolio returns. Consider a stockprice S(t) with dynamics. Monte Carlo Simulations The asset price follows the geometric Brownian motion dS(t) = rS(t)dt + ˙S(t)dB(t): The risk-free interest rate r and the asset volatility ˙are known constants over the life of the option. Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0. Mckean, and Allen T. What are the configurations of $(\alpha,\beta)$ for which (after optimizing the effort within levels) $${\cal C}_{ost} \sim_c \varepsilon^{-2},$$ i. e 46 unique birthdays + 4 common birthdays). Department of Energy's Office of Scientific and Technical Information. is to provide a comprehensive introduction to Monte Carlo methods, with a mix of theory, algorithms (pseudo + actual), and applications. What Is a Monte Carlo Simulation? • Specify a population (i. Supported by a series of Monte Carlo (MC) simulations, Su et al. I'm trying to implement Monte Carlo Simulation to sample out 50 instances of iris data. Here is a pseudocode in Matlab:. Doing Monte Carlo simulations in Minitab Statistical Software is very easy. com, Andrew Swanscott interviews Kevin Davey from KJ Trading Systems who discusses why looking at your back-test historical equity curve alone might not give you a true. This post is the third in a series of posts that I'm writing about Monte Carlo (MC) simulation, especially as it applies to econometrics. Volatility and your Time Horizon. R defines the following functions: rdrr. Modify Bond. A Monte Carlo Simulation Program For Linear Regression Parameters Written In R # a monte carlo simulation for regression parameters by baris altayligil # deparment of economics/istanbul university 2010. Its core idea is to use random samples of parameters or inputs to explore the behavior of a complex process.
|
2020-04-01 20:23:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7150524854660034, "perplexity": 790.4844766373286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506121.24/warc/CC-MAIN-20200401192839-20200401222839-00333.warc.gz"}
|
http://math.stackexchange.com/questions/151870/polynomials-over-gf7?answertab=oldest
|
# Polynomials over GF(7)
The following exercise is from Golan's book on linear algebra.
Problem: Consider the algebra of polynomials over $GF(7)$, the field with 7 elements.
a) Find a nonzero polynomial such that the corresponding polynomial function is identically equal to zero.
b) Is the polynomial $6x^4+3x^3+6x^2+2x+5$ irreducible?
Work so far: The first part is easy. The polynomial $x^7-x$ works by Fermat's little theorem. The second part is trickier. If the polynomial is reducible, it facts into the product of a linear term and something else, or it factors as two quadratics. The first case is easy to exclude; simply plug all seven elements of $\mathbb{Z}_7$ into the polynomial and confirm none of them are a root. The second is harder. Of course, one could just set up the systems of equations resulting from
$$(ax^2+bx+c)(dx^2+ex+f)$$
and go through all the possible values of $a,c,d,f$ and see if the resulting values of $b$ and $e$ are permissible, and while I know that would eventually give me the answer, I have no desire to do all of those computations. Is there a slicker way?
-
Guess-and-check is really the best way to go here. But instead of picking values for the coefficients, I prefer to just find all the irreducible quadratics (checking, by exhaustion, to see which ones have roots) and then see if any of them divide the given polynomial. – Brett Frankel May 31 '12 at 2:59
Rather than find all $21$ irreducible quadratics, it should be easier to consider $x^{49}-x$ (the product of all the monic irreducible polynomials of degree $1$ and $2$) and use Euclid's algorithm to take the GCD of it with your polynomial. – Chris Eagle May 31 '12 at 3:03
Switch the signs, we are trying to see whether $x^4+4x^3+x^2+5x+2=(x^2+bx+c)(x^2+ex+f)$. Not a lot of work. – André Nicolas May 31 '12 at 3:31
@Potato: Then you can multiply the first by $4$ and the second by $2$, and get a decomposition in which the coefficient of $x^2$ in each factor is $1$. – André Nicolas May 31 '12 at 3:36
@Jyrki: Fortunately, we aren't trying to find the factorization, so that's not a problem. – Chris Eagle May 31 '12 at 10:33
Hint $\$ As suggested you could use trial-and-error, and program a computer to test the $7^4 = 2401$ cases that arise from $4$ undetermined coefficients in a factorization into two quadratics. But, as is often true, a little insight trumps brute force. By exploiting innate symmetry, we can reduce the $2401$ cases to $2$ cases. First, shifting $\rm\:x\to x\!-\!1\:$ to kill the $\rm\:x^3\:$ term yields
$$\rm\begin{eqnarray} -f(x\!-\!1) &\equiv&\rm\ \ \ x^4\ +\ 2\ x^2\ -\ 3\ x\ +\ 2\pmod 7 \\ &\equiv&\rm\ (x^2\!- a\, x + b)\ (x^2\! + a\, x + c)\\ &\equiv&\rm\ \ x^4\! + (b\!+\!c\!-\!a^2)\!\: x^2\! + a(b\!-\!c)\:\!x + bc\end{eqnarray}$$
Up to $\rm\, b,c\,$ swaps, $\rm\: bc\equiv 2\!\iff\! (b,c)\, \equiv\, \pm(2,1),\, \pm\:\!(3,3).\:$ $\rm\:b\not\equiv c\:$ else coef of $\rm\,x\,$ is $\,0\not\equiv -3$.
If $\rm\ (b,c) \equiv\ \ \: (\ 2,\ 1\ )\$ then $\rm\:-3 \equiv a(b\!-\!c)\equiv\ \: a\:\$ so $\rm\:b\!+\!c\!-\!a^2\equiv\ \ \ \, 2\!+\!1\!-\!(-3)^2\equiv\ \ 1\:\not\equiv 2$
If $\rm\ (b,c) \equiv (-2,\!-1)\:$ then $\rm\:-3 \equiv a(b\!-\!c)\equiv -a\:$ so $\rm\:b\!+\!c\!-\!a^2\equiv\, -2\!-\!1\!-\!(+3)^2\equiv -5 \equiv 2$
So $\rm\:a,b,c \equiv 3,-2,-1,\:$ is a solution, which yields the factorization $$\rm -f(x\!-\!1)\, \equiv\, x^4 + 2\,x^2 - 3\,x+2\, \equiv\, (x^2-3\,x-2)(x^2+3\,x-1)\pmod 7$$
Therefore $\rm\:f(x)\:$ is reducible since $\rm\:x\to x\!+\!1\:$ above yields a factorization of $\rm\:-f(x).\ \$ QED
Remark $\$ Alternatively, you could use the Euclidean algorithm to compute $\rm\:gcd(f(x\!+\!c),x^{24}\!-\!1)\:$ for random $\rm\:c,\:$ which, $\,$ for $\rm\:c=1\:$ quickly yields $\rm\:x^2\!+\!2\:|\:f(x\!+\!1),\:$ hence $\rm\:f(x)\:$ has the factor $\rm\:(x\!-\!1)^2\!+\!2\, =\, x^2-2\,x+3.\:$ This is how some factoring algorithms work.
-
The answer for (a) is to use Fermat's Little Theorem for the coprime case, ie we know that $$x^7 \equiv x \pmod{7}$$ so $$x^7 - x \equiv 0 \pmod{7}.$$ The noncoprime case, ie $x \equiv 0 \pmod{7}$ is clear.
For the second answer, the best way really is guess and check (there are more complicated algorithms though). It factors into $$6(x^2 + 5x + 3)(x^2 + 6x + 3).$$
-
A brute-force "guess and check" would require checking $2401$ cases! But one can prune the search space by exploiting innate symmetry - see my answer. – Bill Dubuque May 31 '12 at 18:57
I expect that a computer would make short work of it, although if one is allowed to do that then he could just call f.factor() in Sage. – Dylan Moreland May 31 '12 at 19:11
|
2015-11-27 12:02:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8593888878822327, "perplexity": 178.27745144002938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449145.52/warc/CC-MAIN-20151124205409-00074-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/3159938/intuition-of-generalized-eigenvector
|
# Intuition of generalized eigenvector.
I was trying to get an intuitive grasp about what the the generalized eigenvector intuitively is. I read this nice answer, so I understand that in the basis given by the generalized eigenvectors, a jordan block is a linear map that is the sum of a stretch by a factor $$\lambda$$ (eigenvalue associated to the block) and a "collapse", but I don't understand the conclusion on what these famous generalized eigenvectors actually are...
Thus the kernel of $$(T−λI)k$$ picks up all the Jordan blocks associated with eigenvalue $$λ$$ and, speaking somewhat loosely, each generalized eigenvector gets rescaled by $$λ$$, up to some "error" term generated by certain of the other generalized eigenvectors.
Maybe someone that actually understand the last argument can care to explain with some more detail? Thank you.
• I'm not sure where the "collapse" came from. I would talk about a generalized shear. In the case of a $2\times 2$ block, it is literally a stretched shear. Mar 24 '19 at 0:54
Don't look for anything particularly deep or fancy here.
If you have a calculation to do that involves some vectors and a linear operator $$T$$ that you apply perhaps to several of the vectors or more several times in sequence, then it can simplify the calculation if you represent the vectors in an eigenbasis -- since then we have $$T(x_1,x_2,\ldots,x_n) = (\lambda_1x_1, \lambda_2x_2, \ldots, \lambda_n x_n)$$ Each component of the vector just gets multiplied by its associated eigenvalue and the different components don't interact with each other at all.
Unfortunately, not all operators can be written in this nice form, because there may not be enough eigenvectors to combine into a basis. In that case the "next best thing" we can do is choosing a basis where the matrix of $$T$$ is in Jordan form. Then each component of $$T(x_1,x_2,\ldots,x_n)$$ is either $$\lambda_i x_i$$ or $$\lambda_i x_i + x_{i+1}$$, which is still somewhat simpler than multiplication by an arbitrary matrix.
Since this gives us some of what a basis consisting entirely of eigenvectors gives us, in terms of computational simplicity, if seems reasonable to describe them as a generalization of eigenvectors. Especially since in the case where we do have enough eigenvectors for an eigenbasis, generalized eigenvectors are the same as eigenvectors.
I see generalized eigenvectors as an attempt to patch up the discrepancy between geometric multipicity and algebraic multiplicity of eigenvalues. This discrepancy is most easily observed as a result of shear transformations (and looking at Jordan normal forms, we see that shear transformations are in some sense at the core of any such discrepancy).
For instance, take the shear transformation given by the matrix $$\begin{bmatrix}1&1\\0&1\end{bmatrix}$$ It has eigenvalue $$1$$ with algebraic multiplicity $$2$$ (the characteristic polynomial is $$(\lambda - 1)^2$$, which has a double root at $$1$$), but geometric multiplicity $$1$$ (the eigenspace has dimension $$1$$, as it just the $$x$$-axis).
However, the space of generalized eigenvectors is the entire plane, which is 2-dimensional, and more in line with the algebraic multiplicity of the eigenvalue.
Consider the matrix $$A= \begin{bmatrix}3 & 1 \\ 0 & 3 \end{bmatrix}$$. Obviously 3 is the only eigenvalue with "algebraic multiplicity" 2. It is also easy to find the eigenvectors $$\begin{bmatrix}3 & 1 \\ 0 & 3 \end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}= \begin{bmatrix}3x+ y \\ 3y\end{bmatrix}= \begin{bmatrix}3x \\ 3y \end{bmatrix}$$ so that we have 3x+ y= 3x and 3y= 3y. The first equation gives y= 0 and the second is satisfied for any x. So any eigenvector is of the form $$\begin{bmatrix}x \\ 0 \end{bmatrix}= x\begin{bmatrix}1 \\ 0 \end{bmatrix}$$. So the subspace of all eigenvectors has dimension 1 (the geometric multiplicity is 1).
$$v= \begin{bmatrix}0 \\ 1 \end{bmatrix}$$ is NOT an eigenvector: $$(A- 3I)v= \begin{bmatrix}0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}= \begin{bmatrix}1 \\ 0\end{bmatrix}$$, NOT the zero vector. But it does give the previous eigenvector so that applying A- 3I again would give the 0 vector it is a "generalized eigenvector.
First, I want to introduce the Jordan–Chevalley decomposition:
In essence it formalizes what you said: For every trigonalizable map $$f \in End(V)$$ with transformation Matrix $$A$$, there exists a diagonal Matrix $$D$$ (the stretching part) and an nilpotent Matrix $$N$$ (the “collapse” part) such that, $$A = D + N$$
(which makes intuitive sense: $$D$$ will have the same Main diagonal as $$A$$, so $$A – D$$ will be an strictly triangular matrix $$\Leftrightarrow$$ nilpotent).
Now consider a Matrix $$B$$ with just one Eigenvalue $$\lambda$$:
Because $$\lambda$$ is the sole Eigenvalue of $$B$$, the diagonal Matrix $$D$$ in the Jordan–Chevalley decomposition will just have $$\lambda$$ on its diagonal, so: $$\lambda * \mathbb{I}_n = D$$.
Now we rearrange:
$$B = D + N$$ to $$B - D = N \Leftrightarrow (B - \lambda * \mathbb{I}_n) = N$$.
Because $$N$$ is nilpotent, it follows that for an sufficient $$k$$, every Vector in the generalized Eigenspace will be in the Kernel of $$(B - \lambda * \mathbb{I}_n)^k$$, because for a sufficient $$k$$, $$(B - \lambda * \mathbb{I}_n)^k$$ will be the zero map.
Important: If you have more than one Eigenvalue, you need to restrict the Matrix to the respective generalized Eigenspace to reduce it to the above situation.
In this case, $$(A - \lambda * \mathbb{I}_n)^k$$ will not map every vector in V to zero, but just Elements of the generalized Eigenspace associated to $$\lambda$$ (Because you applied the Jordan–Chevalley decomposition to the generalized Eigenspace and not to V)
This is what he meant by “$$(A - \lambda * \mathbb{I}_n)^k$$ picks up all the Jordan blocks associated with eigenvalue Lambda“.
I think the "error" term he wrote about, are the Jordanchains / the way you calculate them:
$$A * v_j = v_{j-1} + \lambda * v_j \Leftrightarrow v_j * (A - \lambda * \mathbb{I}_n) = v_{j-1}$$
So every Vector not only gets scaled, but you also add a Vector. This added Vector is the “error Term".
|
2021-10-23 07:09:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 50, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9419352412223816, "perplexity": 229.4731579935706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00342.warc.gz"}
|
https://nadv.pt/r898pd/copper-oxygen-formula-84aec2
|
Write a balanced equation for the reaction occurring between copper and oxygen. How does the empirical formula calculated from experiment 2 compare to the empirical formula calculated in experiment 1? Kozo Okada, Akio Kotani. Oxygen is a reactive non-metal that exists as a colorless gas under normal conditions. Al2O3. For example, if you see the formula Ba(NO 3) 2, you may recognize the “NO 3 ” part as the nitrate ion, NO 3 −. Upon reduction with excess aluminum metal, the copper What was the weather in Pretoria on 14 February 2013? When a copper atom loses one electron it becomes the copper ion Cu+1. Now let's look at the structure of this compound. Source(s): https://shrinks.im/baqLC. The sulfur has been oxidized (increase in oxidation state). Copper (I) Oxide can react with water as the oxygen is present in the water and make Copper (II) Hydroxide. 1.copper metal heated with oxygen gives solid copper(II) oxide. Molecular weight calculation: 63.546*2 + 15.9994 ›› Percent composition by element What is the empirical formula of the compound? Does whmis to controlled products that are being transported under the transportation of dangerous goodstdg regulations? 2Cu2O + 4H2O + O2 → 4Cu (OH)2 When the funnel is removed from the hydrogen stream, the copper was still be warm enough to be oxidized by the air again. 0 … Oxygen is an oxidizer and a reactant in combustion. Copper I Oxide: Formula, Properties & Structure | Study.com Not sure what college you want to attend yet? 1 Cu: 1 O. Aluminium is the fastest and copper is the slowest of the six. $copper + oxygen \to copper\,oxide$ Copper and oxygen are the reactants because they are on the left of the arrow. Molar mass of Cu2O = 143.0914 g/mol Convert grams Copper(I) Oxide to moles or moles Copper(I) Oxide to grams. What is the empirical formula of the oxide? This is why the Roman numeral I is used in copper(I). A Compound Formed From Copper (Cu) And Oxygen (O) Contains 11.2 % By Mass Of Oxygen. They occur naturally as minerals in crystal form. The correct name for K2S is. Oxides of lead can be reduced by this method using hydrogen, as can other metal oxides. zinc + oxygen zinc oxide. imaginable degree, area of Cupric oxide is a brown colored powder while cuprous oxide is a red coloured. Find its empirical formula. Copper is a strong antioxidant, which works in the presence of the antioxidant enzyme superoxide dismutase to safeguard the cell membranes from free radicals. 1. Sciences, Culinary Arts and Personal Oxygen is an oxidizer and a reactant in combustion. Log in here for access. Copper oxide is formed when copper reacts with oxygen. Conclusions 1. Get the unbiased info you need to find the right school. Lv 4. Copper is a transition metal, and bonds with oxygen to form copper(I) oxide. Did you know… We have over 220 college These are all elemental forms of copper. Colour depends on the relevant atom having energy levels spaced so that either that coloured light is absorbed (in which case you the the other colours, for example chlorophyl looks green because it absorbs the red light!) 1 0. These inorganic compounds occur naturally as minerals in the form of crystals. Why don't libraries smell like bookstores? All other trademarks and copyrights are the property of their respective owners. The copper we will discuss in this lesson is the type of copper that is bonded to oxygen. Goals The reaction of hydrogen gas with a copper oxide compound will be studied quantitatively. Structure, properties, spectra, suppliers and links for: Copper(II) oxide, 1317-38-0, 1344-70-3. 2 of each element to 2 of each element. Now you have 1 copper and 2 oxygen atoms going to 1 copper and one oxygen atom. The greenish and blueish colors of coppers patina come from the next set of reactions. Write down the name and chemical formula of the product of the reaction. Iron combusts in oxygen to form various iron oxides, mainly iron(III) oxide: 4 Fe (s) + 3 O 2 (g) ==> 2 Fe 2 O 3 (s) Iron in its usual bulk solid form will only burn when in pure oxygen with when a great deal of heat is supplied.This is what a cutting torch does. Now you have 1 copper and 2 oxygen atoms going to 1 copper and one oxygen atom. The available human data indicate that gastrointestinal distress occurred following ingestion of solutions that contained 5-7 mg of copper and other unknown chemicals as a single dose. OU Chemical Safety Data (No longer updated) More details Warning Alfa Aesar A14436: WARNING: Causes GI injury, skin and eye irritation Alfa Aesar 40188 Its formula … Copper oxide is the product because it is on the right of the arrow. 2. Select a subject to preview related courses: Copper(I) oxide is insoluble in water, which means water molecules can't break the ionic bond between the copper(I) ions and oxygen ions. Record and calculate the following about the copper sulfate pentahydrate a Mass of CuSO 4 *5H 2 O (g) 260g b Molecular weight of CuSO 4 *5H 2 O 260 g c Moles of CuSO 4 *5H 2 O.0192 d Moles of Cu in the copper sulfate salt:.3962 mol 2. By measuring the masses of copper and oxygen in copper oxide we will determine its empirical formula. Services. The ratio of copper to oxygen in copper(II) oxide can be calculated. The molar mass of copper is 63.55 g/mol and the molar mass of oxygen is 16.00 g/mol. Copper is a part of many coinage metals. Similar to copper, magnesium also reacts with the oxygen in the air when heated to formm magnesium oxide. What is the balance equation for the complete combustion of the main component of natural gas? Heated copper metal reacts with oxygen to form the black copper oxide. Study.com has thousands of articles about every 2. Timing 60 min + further time if a spreadsheet is used to analyse class results. Properties. Copper also forms an important series of alloys with aluminum, called aluminum bronzes. Calculate the empirical formula of this copper oxide. It has copper in its +2 oxidation state. What is the chemical equation of copper plus Oxygen, Copper (I) Oxide = Cu2O (equation: 4Cu + O2 --> 2Cu2O), Copper (II) Oxide = CuO (equation: 2Cu + O2 --> 2CuO). 2 of each element to 2 of each element. Similar to copper, magnesium also reacts with the oxygen in the air when heated to form magnesium oxide. ... Copper (II) cyanide. The ratio should be close to 1:1 as the formula of copper oxide is CuO. The Empirical Formula of a Copper Oxide Reading assignment: Chang, Chemistry 10th edition, pp. This matches what happens in the reaction. Table salt, NaCl, contains an array of sodium and chloride ions combined in a 1:1 ratio. 's' : ''}}. Copper(II) iodide contains 20.13% copper by mass. The reaction setting free the carbondioxide may be possible in acid conditions. This violates the conservation of mass, so you can do one of two things. All rights reserved. Get access risk-free for 30 days, To learn more, visit our Earning Credit Page. Create your account, Already registered? We can see from the product of this reaction the formula for copper(I) oxide is Cu2 O. When the funnel is removed from the hydrogen stream, the copper was still be warm enough to be oxidized by the air again. At read heat, copper metal and oxygen react to form Cu 2 O. Beryllium copper (2 percent Be) is an unusual copper alloy in that it can be hardened by heat treatment. Molar mass of Cu2O = 143.0914 g/mol Convert grams Copper(I) Oxide to moles or moles Copper(I) Oxide to grams. Sociology 110: Cultural Studies & Diversity in the U.S. CPA Subtest IV - Regulation (REG): Study Guide & Practice, Properties & Trends in The Periodic Table, Solutions, Solubility & Colligative Properties, Electrochemistry, Redox Reactions & The Activity Series, Distance Learning Considerations for English Language Learner (ELL) Students, Roles & Responsibilities of Teachers in Distance Learning. These data were used to derive a 1 day health advisory of 1.3 mg/l which, when spread over a day, should provide protection from these acute effects. Matthew has a Master of Arts degree in Physics Education. In this lesson, we will investigate the formula for copper(I) oxide, its properties and its structure. The attraction between these oppositely charged ions is called an ionic bond. But can anyone help me out on doing it out for me and explain how you go to your answer? The ions alternate from copper(I) to oxygen. Copper reacts with oxygen that is in the air, resulting in copper dioxide (Equation 1). When did sir Edmund barton get the title sir and how? Since each copper ion has a +1 charge and each oxygen ion has a -2 charge, we need two copper(I) ions to cancel out one oxygen ion. Think of two magnets with their north and south ends sticking together. Erika. In sulfur dioxide, the oxygen has an oxidation state of -2 and the sulfur +4. Both forms of copper oxide are used in the production of pigments, but they have a number of other, differing, uses. This means a specific ratio of each ion is required to form the neutral compound. flashcard set{{course.flashcardSetCoun > 1 ? In this lesson, we will discuss its formula, physical properties of the compound, and its structure. Copper makes two different oxides according to the valency. Finding the formula of an oxide of copper Topic Moles, stoichiometry and formulae. What is the formula of the compound formed between strontium ions and nitrogenions? It contains a copper (1+). This violates the conservation of mass, so you can do one of two things. The first step in the development of a patina is oxidation to form copper (I) oxide (Cu 2 O), which has a red or pink colour (equation 1), when copper atoms initially react with oxygen molecules in the air. Aluminum and oxygen . Since it gains two negatively charged particles it has the formula O-2. 55-58. Solved: What is the empirical formula for a compound containing 88.8% copper and 11.2% oxygen? Copper(I) oxide is a diamagnetic material. I got Cu2O (copper (I) oxide). When the copper cooking surface comes into contact with acidic food (i.e. 96 g of oxygen to form an oxide. It also has oxide ions. Copper II Oxide: Formula, Properties & Structure, Over 83,000 lessons in all major subjects, {{courseNav.course.mDynamicIntFields.lessonCount}}, Organic & Inorganic Compounds Study Guide, Biological and Biomedical 22-50/53 Alfa Aesar A14436, 12300, 40188: 22-60-61 Alfa Aesar A14436, 12300, 40188: 9 Alfa Aesar A14436: H400-H410-H302 Alfa Aesar A14436: P273-P264-P270-P301+P312-P330-P501a Alfa Aesar A14436: Safety glasses. Example calculation . ››Copper(I) Oxide molecular weight. Goals The reaction of hydrogen gas with a copper oxide compound will be studied quantitatively. These metalloproteins contain two copper atoms that reversibly bind a single oxygen molecule (O 2). They are second only to hemoglobin in frequency of use as an oxygen transport molecule. just create an account. Copper(II) oxide is an ionic compound consisting of copper and oxygen ions. The two copper oxides that can form are ionic compounds. Copper sulfate is an inorganic compound that consists of sulfur and copper. When 1.50g of copper is heated in the air, it reacts with oxygen to achieve a final mass of 1.88g. The Roman numeral I indicates we are dealing with the ion of copper that loses one electron giving it the formula Cu+1. More... Cuprous oxide is any copper oxide in which the metal is in the +1 oxidation state. 55-58. and career path that can help you find the school that's right for you. vinegar, wine), it produces a toxic verdigris, which is poisonous if ingested. 4 years ago. | {{course.flashcardSetCount}} Safety . Since there are two valence states for copper, Cu+ and Cu2+, there are two compounds formed from copper and oxygen: How much money do you start with in monopoly revolution? Oppositely charged particles attract each other. The copper(I) ions and the oxygen ions stick together forming a cubic structure. courses that prepare you to earn 3. mercury(II) nitrate solution reacts with potassium iodide solution to give a mercury (II) iodide precipitate and potassium nitrate solution. Oxygen can combine with copper can combine in different ways to form two types of compound: copper(I) oxide, which is normally a reddish powder, and copper(II) oxide, which is usually a black powder. 2Cu+O2= 2CuO Experiment 2 1. Its density is around 6 g/mL and its molar mass is 143.09 g/mole. Apparatus . These forms of copper oxide as well as the other forms are formed when oxygen combines with copper in different ways. He has taught high school chemistry and physics for 14 years. Copper II oxide is a reddish solid and Copper II oxide is a black solid. One of the first things we notice about something is its appearance. Copper(II) oxide can be reduced by hydrogen and its formula determined. The product is magnesium oxide (MgO) The picture is not the same for all reactions of metals with oxygen. In order to determine the empirical formula for copper sulfide (or for any compound, for that matter) you need to have some information about either the mass of one reactant and the mass of the product, or about the percent composition of the copper sulfide. In this lesson, we will discuss its formula, physical properties of the compound, and its structure. first two years of college and save thousands off your degree. It is used to kill algae as well as a variety of water pests such as bacteria, fungi, snails and weeds. Log in or sign up to add this lesson to a Custom Course. The Empirical Formula of a Copper Oxide Reading assignment: Chang, Chemistry 10th edition, pp. Equipment and Materials Let's now turn our focus on to the properties of copper(I) oxide. Formula and structure: Copper (II) sulfate chemical formula is CuO and its molar mass is 79.55 g mol-1.The molecule is formed by one Cu 2+ cation bond to a oxygen anion O 2-.The cystral structure is a monoclinic crystal system with a copper atom coordinated by 4 oxygen ions. Anyone can earn Copper (II) oxide is an ionic compound consisting of copper and oxygen ions. Copper(II) oxide, also known as cupric oxide, is a chemical compound. a student heats 2.005 g of a copper bromide hydrate of unknown molecular formula, completely driving off the water of hydration, and notices that the weight drops to 0.998g. How long will the footprints on the moon last? Erika. Copper(II) phosphate | Cu3(PO4)2 or Cu3O8P2 | CID 86469 - structure, chemical names, physical and chemical properties, classification, patents, literature, biological activities, safety/hazards/toxicity information, supplier lists, and more. An oxygen atom wants to gain two electrons to fill its outer electron shell. Copyright © 2021 Multiply Media, LLC. CHEMESTRY. Also note that this combination of nitrogen and oxygen has no electric charge specified, so it is not the nitrite ion. Its formula is Cu2 O. What is the Difference Between Blended Learning & Distance Learning? The electrostatic attraction between copper(I) ions and oxygen ions forms a cubic structure. A black solid, it is one of the two stable oxides of copper, the other being Cu 2 O or copper(I) oxide (cuprous oxide). By measuring the masses of copper and oxygen in copper oxide we will determine its empirical formula. Now i know that the empirical formula is the ratio of the chemical elements in a compound in reduced terms. All Rights Reserved. Since there are two valence states for copper, Cu+ and Cu2+, there are two compounds formed from copper and oxygen: Copper (I) Oxide = Cu2O (equation: 4Cu + O2 --> 2Cu2O) Copper(I) oxide has a distinctive reddish-orange or rust color and it is a granular solid. Copper is a transition metal, which are the metals that are between the alkaline earth metals and the metalloids. What did women and children do at San Jose? Copper (I) oxide is further oxidized to copper (II) oxide (CuO), which is black in color (equation 2). Natural gas (mainly methane) can also be used as a reducing agent, but the reaction is much slower. Metals in the reactivity series from aluminium to copper react with oxygen in the air to form the metal oxide. Cu + 1/2O2 ----> CuO (1 copper &1/2*2 = 1 oxygen to 1 copper and 1 oxygen) or 2Cu + O2 -----> 2CuO. Visit the Organic & Inorganic Compounds Study Guide page to learn more. The correct name forSrO is_____ strontium oxide. Heated copper metal reacts with oxygen to form the black copper oxide. You can see that now there are two copper atoms and two oxygen atoms on each side. All ionic compounds have to be electrically neutral. Wear eye protection. The wiring behind the walls in the kitchen and the rest of your residence is most likely copper. {{courseNav.course.topics.length}} chapters | Calculate the molar ratio between copper and oxygen. The oxidation states of the elements oxygen (in the gas) and copper (in the metal) are 0. Copper(I) oxide is a reddish-orange rust-colored granular solid formed between the transition metal copper and oxygen. As a mineral, it is known as tenorite. [Relative atomic mass: O, 16; Al, 27] Solution: Since 2 moles of aluminium atoms combine with 3 moles of oxygen atoms, the empirical formula of the oxide is Al2O3.2. Copper (I) oxide is further oxidized to copper (II) oxide (CuO), which is black in color (equation 2). The mass is recorded before and after the reaction. As we can see in this demo, however, when iron is subdivided finely it burns readily enough in air. Copper-related information from the oxygen (formula presented) resonant x-ray emission in low-dimensional cuprates . Predict the formula of the ionic compound that forms from aluminum and oxygen. 1 0. In chemistry experiments, this reaction can be sparked by heating copper with a burner, turning the … Copper electrical wire and copper pipes must be cleaned with acid-free cleaners before soldering takes place. 5. 4Cu(s) + O 2 (g) → 2Cu 2 O(s) Reaction of copper with water Reaction of copper with the halogens . 4Al (s) + 3O 2(g) 2Al 2 O 3 (s) Zinc reacts fairly quickly to form zinc oxide. The empirical formula for copper oxide is CuO. Copper(II)carbonate is the main part of the green patina on copper or bronze; the inverse reaction (copperoxide + carbondioxide giving coppercarbonate) is much more probable. Cu O2 Balanced Equation. Oxygen gains two electrons and ends up as an oxygen ion O-2. Seems a bit of a strange question ! study Copper(I) oxide has the following properties: To unlock this lesson you must be a Study.com Member. Effects of Oxidation on Copper One positive effect of copper oxidation includes the formation of a protective outer layer that prevents further corrosion. Copper (1+) oxidocopper. Copper also forms an important series of alloys with aluminum, called aluminum bronzes. When did organ music become associated with baseball? Metal Stud Framer: Salary & Job Description, How to Choose a School with a Glass or Metal Engraving Program, Diploma in Metal Working: Program Overview, Metal Fabrication Course and Career Information, Metal Fabricator: Job Outlook & Career Requirements, Metal Patternmaker: Job Description & Requirements, Metal Art Degree and Training Program Information, How to Become a Master Gardener: Certification & Qualifications, Online Diploma in Practical Nursing Program Information, Online International Law Degree Program Overviews, Schools for Aspiring Animal Health Technicians, Online Medical Administrative Assistant Degree Program Info, Copper I Oxide: Formula, Properties & Structure, TExES Mathematics/Physical Science/Engineering 6-12 (274): Practice & Study Guide, Ohio State Test - Physical Science: Practice & Study Guide, AEPA Physical Education (NT506): Practice & Study Guide, MEGA Earth Science: Practice & Study Guide, STAAR Science - Grade 8: Test Prep & Practice, Ohio Assessments for Educators - Health (023): Practice & Study Guide, MTTC Health (043): Practice & Study Guide, AEPA Health (NT505): Practice & Study Guide, TExES Science 4-8 (116): Practice & Study Guide, MTLE Earth & Space Science: Practice & Study Guide, How Daily & Seasonal Cycles Affect Plants, Implantation & Development of the Embryo & Placenta, Effects of Nutrition, Exercise & Environment on Health, The Impact of Viruses & Microorganisms on Homeostasis, Explaining & Analyzing Processes of Life in Biology: Practice Problems, Quiz & Worksheet - Groundwater Formation & Supply Issues, Quiz & Worksheet - Ocean Circulation Patterns, Curriculum & Instruction for Physical Education, Communication in the Physical Education Classroom, Praxis Health & Physical Education: Content Knowledge Flashcards, California Sexual Harassment Refresher Course: Supervisors, California Sexual Harassment Refresher Course: Employees. When we think of copper we might think of the Statue of Liberty because it is made of copper. The two copper … Both forms of copper oxide … Conclusions 1. Copper oxidation, on the other hand, prevents further oxygen exposure and corrosion by solidly adhering to the metal's surface. credit-by-exam regardless of age or education level. . The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. Here is the balanced symbol equation: 2Cu + O2 → 2CuO. Calculate the % of copper in copper sulphate, CuSO 4; Relative atomic masses: Cu = 64, S = 32 and O = 16; relative formula mass = 64 + 32 + (4x16) = 160 ; only one copper atom of relative atomic mass 64 % Cu = 100 x 64 / 160 = 40% copper by mass in the compound. {{courseNav.course.mDynamicIntFields.lessonCount}} lessons It dissolves in acids to make copper(II) salts. Copper(II) oxide or cupric oxide is the inorganic compound with the formula CuO. This leads to the ionic bond between two copper(I) ions and one oxygen ion. You may have copper pots and pans in your kitchen cabinets. Do not breathe dust. We can go further and translate the picture equation for the reaction between magnesium and oxygen to a chemical equation: 2 Mg + O 2 → 2 MgO. In this lesson, we will concentrate on the form of copper that loses one electron. Beryllium copper (2 percent Be) is an unusual copper alloy in that it can be hardened by heat treatment. The copper oxide can then react with the hydrogen gas to form the copper metal and water. What is the simplest formula of the copper oxide. Copper(II) oxide or cupric oxide is the inorganic compound with the formula CuO. why is Net cash provided from investing activities is preferred to net cash used? 2. mixing ammonium nitrate and sodium hydroxide solutions gives aqueous sodium nitrate, ammonia gas and water. As we previously discussed, ionic compounds are formed from the attraction of oppositely-charged particles. Cu + 1/2O2 ----> CuO (1 copper &1/2*2 = 1 oxygen to 1 copper and 1 oxygen) or 2Cu + O2 -----> 2CuO. Enrolling in a course lets you earn progress by passing quizzes and exams. Helps Prevent Premature Aging. credit by exam that is accepted by over 1,500 colleges and universities. A black solid, it is one of the two stable oxides of copper, the other being Cu 2 O or copper(I) oxide (cuprous oxide). Copper (I) oxide is a reddish powder whereas Copper (II) oxide is a black powder. How does the empirical formula calculated from experiment 2 compare to the empirical formula calculated in experiment 1? This copper oxide from reaction 2 is the main culprit that will later form the colors of the patina. Create an account to start this course today. Copper deficiencies are seen in many third world countries and are reflected in a number of birth and growth defects in children of those nations. 5. The copper oxide can then react with the hydrogen gas to form the copper metal and water. The copper dioxide then reacts with more oxygen to form copper oxide (Equation 2). 1.08 g of aluminium combines chemically with 0. You can test out of the Both are insoluble in water. Diamagnetism is a property of some substances that makes them repel magnetic fields. Note that similarly, you can calculate the % of the other elements in the compound e.g. The chemical equation. $copper + oxygen \to copper\,oxide$ Copper and oxygen are the reactants because they are on the left of the arrow. Following is the chemical equation to understand the chemical reaction of copper (I) oxide and water. Aluminium reacts quickly to form a surface layer of aluminium oxide. It reacts readily with concentrated ammonia, nitric acid, hydrochloric acid and sulfuric acid. The first step in the development of a patina is oxidation to form copper (I) oxide (Cu 2 O), which has a red or pink colour (equation 1), when copper atoms initially react with oxygen molecules in the air. That means that both the copper and the oxygen have been reduced (decrease in oxidation state). Let's figure out what happens to copper, so it can bond with oxygen. If sulfur is present on the surface of the copper, then the two can react to form copper sulfide, which is black (Equation 3). Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. Molecular weight calculation: 63.546*2 + 15.9994 ›› Percent composition by element Copper oxide is the product because it is on the right of the arrow. Since the mole ratio is effectively 1:1, the empirical formula is Cu1O1 or CuO 6. 2.5184g Question 7 EXPERIMENT 2: Calculate the number of moles of oxygen in the copper oxide. Write a balanced equation for the reaction occurring between copper and oxygen. Copper(I) oxide even reacts with oxygen in the air to form copper(II) oxide, CuO. Mass copper oxide = 1.76 g Mass copper = 1.43 g So mass oxygen = 0.33 g. Number moles Cu = 1.43/63.5 = 0.02251 Number moles O = 0.33/16 = 0.020625. Divide by the smallest to give the ratio approx. Second, if you recognize the formula of a polyatomic ion in a compound, the compound is ionic. Its chemical formula is CuO. Hemocyanins (also spelled haemocyanins and abbreviated Hc) are proteins that transport oxygen throughout the bodies of some invertebrate animals.
Instrumental Love Songs 2019, Sideload Manifest File In Excel, Entrecote Steak How To Cook, Why Do Rottweilers Drool So Much, Westinghouse Wgen7500 Lowe's, Kreg Dowel Jig,
|
2021-06-18 02:37:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2722438871860504, "perplexity": 3582.928809771451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634616.65/warc/CC-MAIN-20210618013013-20210618043013-00467.warc.gz"}
|
https://quantumcomputing.stackexchange.com/tags/textbook-and-exercises/hot
|
# Tag Info
6
Given $\rho$ and a fixed ensemble $\{ |\psi_i \rangle \}$ it might not be possible to write $\rho$ as $\sum_i p_i |\psi_i \rangle \langle \psi_i |$. For example, let $| + \rangle = \frac{1}{\sqrt{2}} (| 0 \rangle + | 1 \rangle )$. Then the state $|+\rangle \langle + |$ cannot be expressed as a convex combination in the ensemble $\{ | 0 \rangle, |1\rangle \... 5 Choi operator of a linear map$\mathcal{E}$is defined as $$J(\mathcal{E}) = \sum_{ij} \mathcal{E}(|i\rangle\langle j|)\otimes |i\rangle\langle j|.\tag1$$ Substituting$\mathcal{E}(\rho)=\sum_k E_k\rho E_k^\dagger$into$(1), we have \begin{align} J(\mathcal{E}) &= \sum_{ijk} \left(E_k|i\rangle\langle j| E_k^\dagger\right)\otimes |i\rangle\langle j|... 4 We know that \begin{gather} |0\rangle = \frac{|+\rangle+|-\rangle}{\sqrt{2}} \\ |1\rangle = \frac{|+\rangle-|-\rangle}{\sqrt{2}} \end{gather} $$Thus, we can rewrite the GHZ state as$$ \begin{align} |GHZ\rangle &= \frac{1}{\sqrt{2}}\left(|0\rangle|00\rangle+|1\rangle|11\rangle\right) \\ &=\frac{1}{2}\left(|+\rangle|00\rangle+|-\rangle|00\rangle+... 3 LetM\in\mathrm{Lin}(\mathcal Y\otimes\mathcal X)$be some linear operator whose input and output spaces are both$\mathcal Y\otimes\mathcal X$, for some pair of finite-dimensional Hilbert spaces$\mathcal X,\mathcal Y$. Moreover, suppose$M$is positive semidefinite:$M\ge0$. It being positive semidefinite implies it admits a decomposition of the form$M=\...
3
Consider the state $|\Psi\rangle$. This has a Schmidt decomposition $$|\Psi\rangle=U_A\otimes U_B\sum_i\lambda_i|ii\rangle.$$ Its reduced density matrix is $$\rho_A=U_A\left(\sum_i\lambda_i|i\rangle\langle i|\right)U_A^\dagger.$$ It must be that if $|\Phi\rangle$ has the same reduced density matrix, the density matrices have the same spectrum and hence $|... 2 As per N&C, fidelity is "analogous to the probability of doing the decompression correctly" (emphasis added). The goal is to do the operation correctly with 100% probability, which means the probability is 1. This is the desired limit of fidelity, so no error means the fidelity is 1. 2 There are many demos on https://pennylane.ai/qml/demonstrations.html. You could perhaps get some inspiration from there. 2 This is due to how the$\mathbf{A}$matrix was defined; from that same tutorial page we have: $$\tag{1} \mathbf{A} = \sum_{n} c_n A_n$$ where each$A_n$is unitary and$c_n$is complex (and in the original VQLS paper they further impose$\lVert {\mathbf{A}}\rVert<1$and bounded condition number) but$\mathbf{A}$is never required to be unitary. Therefore,... 1 An intuitive way to think about it is that$E[M]=E[X_1 \otimes Z_2]=E[X_1 \otimes \mathbb{1}]E[\mathbb{1} \otimes Z_2]$If we only think about$E[\mathbb{1} \otimes Z_2]$, it is just the expectation value of$Z_2$on the second qubit. Consider that our second Qubit in the entangled state$\frac{| 00\rangle + | 11\rangle}{\sqrt{2}}$is measured to be$\frac{+\...
1
Taking the last two terms of last expression you gave, we can do the following \begin{align} M \left(\frac{|00\rangle+|11\rangle}{\sqrt{2}}\right) &= X_1\otimes Z_2\left(\frac{|00\rangle+|11\rangle}{\sqrt{2}}\right) \\ &= \left(\frac{X_1|0\rangle \otimes Z_2|0\rangle+X_1|1\rangle \otimes Z_2|1\rangle}{\sqrt{2}}\right) \\ &= \left(\frac{|1\... 1 Although it is not explained up to that point in the Qiskit textbook, the quantum toss is in reality applying the Hadamard gate, denoted H. In matrix form, this operator looks like: H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} $$Now, we express the basis states in column form as follows:$$ \begin{gather} |0\rangle = \...
1
Quantum (pure) states are, by definition, defined up to a scalar complex factors. That means that a state that we write as $|\psi\rangle$, should really be understood as the full set of vectors (an equivalence class if you will) $\{\lambda|\psi\rangle : \lambda\in\mathbb C\}$. The more formal way to put this is to say that quantum states are elements in the ...
1
When we consider an uniform (or equal, as stated in Nielsen and Chuang) superposition, that is, a state that can be written as: $$|\psi\rangle=\frac{1}{2^n}\sum_x|x\rangle,$$ it is quite common not to write the normalisation constant $\frac{1}{2^n}$. Similarly, when the amplitutes of all vectors on the superposition are equal, we omit the normalisation ...
1
First of all, if we write down $\left|\psi_1\right\rangle$, we get: $$\left|\psi_1\right\rangle=\frac{1}{\sqrt{2}^n}\sum_x|x\rangle\left[\frac{|0\rangle-|1\rangle}{\sqrt{2}}\right].$$ Applying $f$ on this state gives us: $$\left|\psi_2\right\rangle=\frac{1}{\sqrt{2}^n}\sum_x|x\rangle\left[\frac{|f(x)\rangle-|1\oplus f(x)\rangle}{\sqrt{2}}\right].$$ Note that ...
1
$P$ is acting on the space $V$, projecting onto the subspace $W$. Yes, if it only acted on the subspace $W$, it would be identity, but it is acting on a larger space. For example, think about a qubit, where $V$ is spanned by the basis states $\{|0\rangle,|1\rangle\}$. You can define a projector $P=|0\rangle\langle 0|$ which projects onto the space $W$ which, ...
1
In a similar way to how the global phase difference of a state makes no physical difference, neither does amplitude of a state. We normalise states to have unit magnitude for mathematical convenience in the same way we don't carry around an $e^{i\phi}$ factor for arbitrary $\phi$ with all our states. This is because having unit vectors means we don't need to ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2021-09-21 18:23:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996680021286011, "perplexity": 699.3166763963156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00700.warc.gz"}
|
https://www.biostars.org/p/367960/
|
Tutorial:Generating consensus sequence from bam file
0
27
Entering edit mode
2.1 years ago
One of the recurring questions on biostars is "How can I create a consensus sequence from my bam file?" A variation of these question is "How to get fasta out of bam file?".
The rational behind this question is usually, to get the sequence for a given region of interest of my sample, so that one can do a multiple alignment between different samples/species.
The answer to this question is a simple two-step workflow:
1. Call variants
2. Include the variants into the reference sequence
Do the variant calling step with your favorite program/workflow, e.g. with bcftools:
$bcftools mpileup -Ou -f ref.fa input.bam | bcftools call -Ou -mv | bcftools norm -f ref.fa Oz -o output.vcf.gz In the end, one needs a valid, sorted vcf file which is compressed with bgzip and indexed with tabix. I also recommend to normalize your vcf file (In the above command this is done by bcftools norm). If your vcf isn't already compressed you can do this with: $ bgzip -c output.vcf > output.vcf.gz
And indexing is done by:
$tabix output.vcf.gz Now we are ready to create our consensus sequence. The tool of choice is bcftools consensus. If one like to create a complete new reference genome, based on the called variants, this is done by: $ bcftools consensus -f ref.fa output.vcf.gz > out.fa
If you are interested in only a given region:
$samtools faidx ref.fa 8:11870-11890 | bcftools consensus output.vcf.gz -o out.fa You can also create consensus sequences for multiple regions, if you provide a bed file to samtools faidx: $ samtools faidx -r regions.bed ref.fa | bcftools consensus output.vcf.gz -o out.fa
Have fun :)
fin swimmer
bam fasta consensus Tutorial • 3.7k views
0
Entering edit mode
Thanks for this! I have a question about tweaking the commands to have the consensus sequence report a "wobble" base in cases of heterozygosity, for example an R would be reported where a SNP is heterozygous for G/A. Do you think this is possible to implement in your pipeline? I'm writing in the context of healthy somatic human genomes (diploids).
0
Entering edit mode
According to the manual this should be possible with the -I argument:
-I, --iupac-codes
output variants in the form of IUPAC ambiguity codes
0
Entering edit mode
This seems like a very long-winded solution and doesn't apply if you don't have a reference to hand.
However that said, if you don't then your aligned BAM was probably aligned against a de-novo assembly and somewhere there should be a consensus lurking around.
I've generally done some trivial counting on the mpileup output, line by line, to call the consensus directly without going via vcf to start with and without needing a reference file. I should probably write such a tool and add it to samtools as it's a recurring question and can be done MUCH faster than needing to use bcftools.
0
Entering edit mode
Hello jkbonfield ,
I've generally done some trivial counting on the mpileup output, line by line, to call the consensus directly without going via vcf to start with and without needing a reference file.
• How can you produce a mpileup if you don't have a reference?
• Trivial counting might work if you have perfect reads and simple regions. But finding out what is the real sequence for your sample is much more that just counting and hence you are doing variant calling.
I should probably write such a tool and add it to samtools.
I would always recommend if there is a widely used tool to use it instead of writing your own. It is very likely that one introduce much more errors by writing its own tool. If I'm right, you are one of the active contributors to samtools? So this might be an exception.
fin swimmer
0
Entering edit mode
"samtools mpileup foo.bam" will produce the columns showing what bases are aligned at each position. The reference column will be N, but that's irrelevant if you want the consensus (unless you're planning on using imputation to use reference in zero coverage regions, which can be useful in some situations).
As for not writing my own software - well that's what I do for a living (and yes much of it ends up in htslib and samtools). :-) There may well be tools out there already to generate consensus; bound to be infact, including my own hacky awk and perl one-liners, but having everything in the same package is nice for usability and discoverability.
0
Entering edit mode
Thank you for this! I am trying to get the mtGenome sequence for my samples to do a MSA, but maybe I'm not understanding what the output for this should look like. When I try to get the consensus for my chrM (mtGenome), the fasta file only contains 1 sequence. I thought it was supposed to get the mtGenome sequences of all my samples in a single fasta file, right? I don't know what I'm doing wrong :(
1
Entering edit mode
As the title suggests you get a consensus i.e. one sequence. When compared to the reference you started with this consensus will have SNP's where there are differences between your data and the reference.
You will need to generate a consensus independently for each sample if you have more than one sample.
0
Entering edit mode
Oh, since I never saw the -sample flag with the instructions I've seen around, I assumed that it was able to deal with a multi sample VCF without it and it would do for all the samples! Thanks for clarifying that.
1
Entering edit mode
It may be possible to do so but the example shown in tutorial is for a single sample VCF.
|
2021-04-13 14:02:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3703833520412445, "perplexity": 1666.6392841667819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072366.31/warc/CC-MAIN-20210413122252-20210413152252-00440.warc.gz"}
|
https://www.nature.com/articles/s41467-018-02903-y?error=cookies_not_supported&code=62355626-ef16-4f8c-b13b-60ce4e3dbfec
|
# Publisher Correction: Nanoplasmonic electron acceleration by attosecond-controlled forward rescattering in silver clusters
### Subjects
The Original Article was published on 30 October 2017
Correction to: Nature Communications https://doi.org/10.1038/s41467-017-01286-w, published online 30 October 2017
The original PDF version of this Article contained an error in Equation 1. The original HTML version of this Article contained errors in Equation 2 and Equation 4. These errors have now been corrected in both the PDF and the HTML versions of the Article.
The original PDF version of this Article contained an error in Equation 1. A dot over the first occurrence of the variable ri was missing, and incorrectly read:
$$E_{{\mathrm{sp}}} = \frac{m}{2}\left| {{\mathbf{r}}_i} \right|^2 + V\left( {{\mathbf{r}}_i\left( t \right),t} \right)$$
The correct form of Equation 1 is as follows:
$$E_{{\mathrm{sp}}} = {\frac{m}{2}}\left| {\dot{\mathbf {r}}}_{i} \right|^{2} + V\left( {\mathbf{r}}_{i}\left( t \right),t \right)$$
This has now been corrected in the PDF version of the Article. The HTML version was correct from the time of publication.
The original HTML version of this Article contained errors in Equation 2 and Equation 4.
In Equation 2, a circle over the first occurrence of the variable ri replaced the intended dot, and incorrectly read:
$$\dot E_{{\mathrm{sp}}} = q_i {{\mathop {\bf r}\limits^{\circ}}_i} (t)\cdot{\cal E}_{{\mathrm{las}}} + \frac{\partial }{{\partial t}}V\left( {{\mathbf{r}}_{i}\left( t \right),t} \right)$$
The correct form of Equation 2 is as follows:
$$\dot E_{\mathrm{sp}} = q_i{\dot{\mathbf {r}}}_{i}(t)\cdot{\cal E}_{\mathrm{las}} + \frac{\partial }{\partial t}V\left( {\mathbf{r}}_{i}(t),t \right)$$
In Equation 4, circles over the first and fifth occurrences of the variable ri replaced the intended dots, and incorrectly read:
$$\frac{\mathrm{d}}{{{\mathrm{d}}t}}E_{\mathrm{sp}} = m_{i}{{\mathop {\bf r}\limits^ \circ}_{i}} \cdot {\ddot{\mathbf{r}}}_{i} + \left[ {\nabla _{{\mathbf{r}}_{i}}V\left( {{\mathbf{r}}_{i},t} \right)} \right].{{\mathop {\bf r}\limits^ \circ}_{i}} + \frac{\partial }{{\partial t}}V\left( {{\mathbf{r}}_{i},t} \right)$$
The correct form of Equation 4 is as follows:
$$\frac{{\mathrm{d}}}{{{\mathrm{d}}t}}E_{{\mathrm{sp}}} = m_{i}{\dot{\mathbf {r}}}_{i}.{\ddot{\mathbf {r}}}_{i} + \left[ {\nabla _{{\mathbf{r}}_{i}}V\left( {{\mathbf{r}}_{i},t} \right)} \right].{\dot{\mathbf {r}}}_{i} + \frac{\partial }{{\partial t}}V\left( {{\mathbf{r}}_{i},t} \right)$$
This has now been corrected in the HTML version of the Article. The PDF version was correct from the time of publication.
## Author information
Authors
### Corresponding authors
Correspondence to Josef Tiggesbäumker or Matthias F. Kling or Thomas Fennel.
The original article can be found online at https://doi.org/10.1038/s41467-017-01286-w.
## Rights and permissions
Reprints and Permissions
Passig, J., Zherebtsov, S., Irsig, R. et al. Publisher Correction: Nanoplasmonic electron acceleration by attosecond-controlled forward rescattering in silver clusters. Nat Commun 9, 629 (2018). https://doi.org/10.1038/s41467-018-02903-y
• Published:
• ### Accurate retrieval of ionization times by means of the phase-of-the-phase spectroscopy, and its limits
• D. Würzler
• , S. Skruszewicz
• , A. M. Sayler
• , D. Zille
• , M. Möller
• , P. Wustelt
• , Y. Zhang
• , J. Tiggesbäumker
• & G. G. Paulus
Physical Review A (2020)
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.
|
2020-12-01 16:27:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3029552400112152, "perplexity": 3170.6913084896532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141674594.59/warc/CC-MAIN-20201201135627-20201201165627-00208.warc.gz"}
|
https://www.rieti.go.jp/en/events/bbl/04102801.html
|
The Impact of the U.S. Presidential Election on United States Trade Policy
Date October 28, 2004 Peter COWHEY(Dean, Graduate School of International Relations and Pacific Studies, University of California, San Diego) TANABE Yasuo(Vice President, RIETI)
Summary
My topic today is U.S. trade policy after the presidential election. But let me start by saying that I have no idea who will be president. The big uncertainty is turnout, which is unpredictable and it could lead to a significant victory for one candidate at the last second. However, President Bush has more ways of winning the election because of the Electoral College. The second point that is that so far, there is no indication of any radical change in the control of Congress. So either one or both houses of Congress will remain under Republican control, and there is a certain element of stability to trade policy. My general conclusion is that either a Bush or Kerry administration will have room to continue to do new trade agreements and bring the Doha Round to a successful conclusion. But in general, both will find it easier to do bilateral and regional trade agreements than WTO agreements. Asia and especially China will be tough political problems. We also need to take a careful look over the long term at the new security issues and how they may affect trade.
The Uruguay Round left so many innovations in trade that just implementing the Uruguay Round and figuring out the implications of the agreements on intellectual property and investment-related measures on trade and services as new areas for the World Trade Organization (WTO) constitute a continual agenda for trade policy. Both President Bush and Senator Kerry will face the problem of building the same excitement about Doha that existed about the Uruguay Round. The Doha Round has made substantial progress, and the issue of trade facilitation is attractive to U.S. business interests, but politically it is not an exciting negotiation. Trade policy in this election has been low-profile because Senator Kerry does not want to get caught in a discussion of free trade versus protection because of his labor union support; and outsourcing of jobs is a troubling issue for the blue-collar support of President Bush. Both candidates avoid trade issues in the election, reflecting the general politics in the U.S. that trade is O.K. as a specialized issue, carefully watched over and interest-balanced, but it is not a popular issue.
During my three years in the Clinton administration as a U.S. negotiator of the WTO agreement on basic telecommunication services, I learned that trade agreements are political agreements. They are political commitments at the highest level to try to act in good faith in regards to these very legal sounding obligations. You can use WTO enforcement actions to retaliate, but these are crude tools for keeping a trading relationship going. So the real purpose of binding trade agreements is to keep a very high level of political attention and commitment to the trading relationship.
In order to have the president paying attention, you have to have new initiatives that justify committing the president's time. If trade is to maintain the type of political commitment in order to have a free trade policy, the trade representative must have a continuous series of new agreements, which is why we will see a high level of bilateral activity no matter which party wins the presidency.
As my colleague, Professor Richard Feinberg, has noted, there are five big arguments that make bilateral and regional free trade agreements easier for administrations to support than WTO agreements. First, they help open particular foreign markets for U.S. companies. Most analyses show that concessions made by the trading partners are proportionally larger than those made by the United States, which is good politics for any White House. Also, there is the argument of defensive bilateral liberalization - for the U.S. to keep pace with other major trading powers. A third argument is that the small agreements advance the bigger global agenda of the U.S. For example, in the early days of the Uruguay Round, U.S. Trade Representative Bill Brock engaged in bilateral trade agreements as a way to push the Uruguay Round agenda, which was a good negotiating technique for the United States.
Fourth is the argument that bilateral and regional trade agreements are strategic economically and in the broader political sense. They allow for more secure, long-term diplomatic partnerships with key countries. It is important to note that most bilateral free trade agreements are initiated by the other country and by countries that are considered to be close political allies of the U.S. Finally, the support of domestic market reforms around the world is an important rationale for these bilateral agreements. Together, these five explanations of bilateral free trade agreements make for a powerful package in Washington and they are the reason why such agreements are the easiest route for any U.S. president.
Another point about the politics of bilateral trade agreements is that the other countries, usually ones with good political relations to the U.S., ask for them. This allows the White House to do something important politically: it turns to the trade advisory groups that advise the U.S. trade representative and asks if they would like this free trade agreement and what the priorities should be for U.S. negotiators. This allows the White House to get a political reading in advance, and reduce risk. The group then lays down the terms and conditions that would make it enthusiastic and then the trade representative can negotiate these terms. Politically, it is the safest route and this is the reason why bilaterals are so popular. In contrast, a WTO negotiation concerns a huge number of countries and takes years to figure out, which can be a risky situation.
The other important point is that on many of the issues that the American business or labor community cares about most, it is easier to make progress on the bilateral and regional negotiations. The agenda for Doha has essentially dropped major discussions on investment, competition policy, and transparency from the final negotiating agenda. In contrast, at the bilateral level, you can do a lot on these issues. Let me admit to the dirty secret that some of these bilateral agreements may not make for good global policy. But after we admit that, nonetheless, they allow for more innovation than at the global level on many issues that are central to U.S. commercial or labor interests.
I would like to turn to some of the economic factors of trade policy, especially the role of foreign investment, China, and the current account deficit as three economic factors that are going to shape the choices of either candidate. The U.S. stock of foreign investment is very high: over U.S.$1 trillion. This investment is central to explaining American trade policy and market patterns of trade. Over half of the trade involving the United States now goes through internal channels of multinational corporations, not between independent exporters and importers. To make an account of U.S. trade balances is to look at the flows of economic activities inside these corporations. So U.S. trade policy is limited by the percentage of U.S. trade that is just an internal production decision inside a multinational corporation. Also, years of large U.S. current account deficits have led to huge overseas holdings of U.S. dollars. So the U.S. freedom to manage its trade policy is limited by those holdings, which limit U.S. freedom to manipulate the exchange rate. The consequences of changing the dollar's exchange rate are very complex. A rapid decline in the dollar's value may lead to a sell-off of Treasury holdings in Asian financial markets, causing a problem for the Treasury's financing of the U.S. deficit. Any adjustment in the U.S. dollar capable of affecting the trade deficit will require a negotiated understanding among the major economic powers. Also, we have the fact that the euro is a substitute, increasingly, for international transactions. And there is the possibility of a virtual Asian currency. By a virtual currency, I mean financial markets may invent a currency from a bundle of key currencies. So the U.S. constraints will grow larger over time and that means the U.S. has limited options on free trade, which makes it hard for any president to contemplate a significant retreat from free trade policies. But the U.S. trade deficit is not politically neutral in its impact. The U.S. global account deficit in 2003 was about 5% of GDP - larger than the deficit run in the 19th century. This is difficult to sustain over the long term, and there needs to be adjustments in the size of the deficit. Looking at the composition of the deficit, the U.S. accounts with the EU are in deficit by U.S.$95 billion; with the North American Free Trade Agreement (NAFTA) partners by U.S. $95 billion; and in Asia, by U.S.$125 billion with China and U.S. \$67 billion with Japan. So the political impact in Washington of those deficits differs, depending how people see those countries politically in the United States. In terms of the EU, there will be some issue-specific fights over some prominent industries such as Boeing and Airbus, and steel; but Europe is not seen as a strategic economic threat.
The story with NAFTA is a little different. These deficits together are quite large, but Mexico and Canada are so much a part of the U.S. production base that it is impossible to dissolve those networks. The trade deficit is there because of U.S. or foreign corporations sourcing their U.S. supply bases out of Canada and Mexico. No U.S. company will support shutting down those open borders. The political issues that arise tend to be about very specific domestic political issues in the U.S. Mexico is particularly tied up with the labor unions because of the effect of Mexican trade and migration on wages in border areas. They are also tied to environmental issues like timber and fishing. These issues are classic domestic political issues in the United States that take on a trade dimension because Canada and Mexico are so close. The biggest structural issues are those involving Canadian subsidies, especially in raw materials and timber.
In Asia, trade tensions have existed between the U.S. and Japan for years. Today, the political relationship is much stronger than in the 1980s and early 1990s and the trade issue is less politically sensitive. After a decade of slow economic growth in Japan, there is not the same political intensity in the U.S. about the trade deficit with Japan, and the fear that Japan will surpass the U.S. in the world economy is no longer a central issue. While there are tensions on particular issues like the automobile trade, Japan's relationship is much better than in the past. But there are memories, and some in Washington will try to play off those memories in the future.
But Japan is, in a sense, hidden behind China now, as China accounts for the larger trade deficit and it is the latest fear for those worried about the long-term economic leadership of the U.S. The trade deficit with China, partly created by China's imperfect trade policies, will be the center of political attention in Washington, and trade relations with China will be a hot button for either candidate. To some extent, it has been buried thus far because the Clinton administration agreed to China's joining the WTO. And security arrangements with China have been stressed more than trade by the Bush administration. But in the long term, our current trade pattern with China is not politically viable and either candidate will have to have more aggressive trade policies towards China in the future while maintaining a basic free trade policy.
Both Bush and Kerry fundamentally support free trade, but are capable of compromising on that if it gets politically tough. But at core, they support a free trade policy. President Bush has already shown and Senator Kerry will show that he will support reform structurally through the WTO on issues like export subsidies and credits, though it is unlikely that either one will give up the ability of the U.S. to subsidize exports entirely. For the reasons I have suggested, they will tend to emphasize bilateral and regional negotiations, but both would support a conclusion to the Doha Round. In truth, a Kerry administration will not be that different from a second Bush administration in regards to issues of labor, the environment and outsourcing.
There are some possibilities for significant differences on issues like structural adjustment, currency realignment and other basic economic issues that affect trade; and areas like energy policy and high technology. First of all, we know who the leaders of Kerry's macroeconomic team will be but we do not know who the U.S. trade representative will be. So there is no trade policy team waiting in the wings in the same sense that there is a macroeconomic policy team waiting in the wings. Kerry will try to politically set a profile that emphasizes some distinctions about targeted trade adjustment agreements. In highly unionized areas like the automobile industry, a Kerry administration will try for a firmer trade negotiating stance. But the problems of American industry on trade are not going to be fixed by any minor adjustment, so there is a limit to what a Kerry administration could do. A Kerry administration will try to be a champion in U.S. high-technology disputes and bring more enforcement actions to the WTO than the Bush administration has. Also, a Kerry administration will experiment with "buy America" programs for government procurement. But they will not let themselves get into a major WTO fight about this.
More structural policies by a Kerry administration will be in areas such as exchange rate policy. Both President Bush and Senator Kerry's economic teams are going to face the need for a major change in the value of the U.S. currency against other currencies. But changing your currency's value is as much a part of domestic macroeconomic policy as it is a foreign exchange rate policy. Therefore, how they go about trying to deal with currency depreciation will be different because their macroeconomic policies will be different. This is a big unknown.
While the U.S. labor market is flexible, there has been significant political pain in the last four years due to the adjustments in the labor market caused by international competition. A Kerry administration may try a dramatic new initiative on worker assistance. For example, instead of giving workers job training and temporary wages, he could do something like buying out their jobs. A buyout is cheaper than protecting the job and it provides an independent economic base for the worker in the future. Senator Kerry is going to need a big idea for helping labor - one that is not protectionist. Senator Kerry will also try to justify a major health care initiative as a type of economic trade relief for unionized American companies. Nationalized health care programs will be a way of helping companies with their high costs of health care. That linkage between health care and international competition will be more explicit in a Kerry administration.
A Kerry administration may have a more dramatic policy for reducing oil imports through conservation and other methods. Politically, it would be good as a contrast to the Bush administration and would be a way to tackle the trade issue, as oil imports are roughly 20% of the U.S. trade deficit. Regarding outsourcing, you heard a lot in the early days of the campaign but not so much now. As we know from economics, it is the slow economic growth and rapid rise in productivity in the U.S. that are the main sources of job losses, not outsourcing. The Kerry proposals to lower the U.S. corporate tax rate and end tax deferment of overseas profits are pretty modest ways of addressing outsourcing.
For labor rights in general, the Kerry team announced it will support the American Federation of Labor-Congress of Industrial Organizations (AFL-CIO) petition on unfair labor practices by China. The theory of the labor petition is that the lack of labor rights in China reduces wages and lowers Chinese costs, and therefore hurts U.S. jobs. The economics of this are not so clear and if it were brought to the WTO, it is hard to see that it would be a successful case. But you may see a Kerry administration try to show that they are honoring their commitment.
In general, the Bush and Kerry policies on labor are very similar in some ways. Both support national enforcement of international labor agreements and both will go toward more side agreements on the environment. A Kerry administration may be more aggressive on particular issues. Both would support the Doha agreement and, in the end, will be able to sign onto a reasonable WTO proposal. With respect to this, the U.S. offer made in July may be strengthened at the last minute of the negotiations, but either president will have restraints on what they can offer. Therefore, there will not be room for dramatic changes.
Lastly, regardless of who is president, we will face the implications of security policy for trade policy. The growth of information security infrastructures (ISI) could create a major new indirect trade barrier. This will be one of the real long-term security and trade issues that we will face. In closing, let me flag one other industry-specific issue. Biotechnology is going to be one of the driving forces of the world economy for the next 50 years, but the spread of biotechnology know-how is going to raise security risks in the industry. Today, the ability to do dangerous biotechnology experiments is limited to a small number of laboratories, but we are going to face the question of ground rules for the biotechnology industry 20 years from now. These issues are not partisan; they are the sorts of cutting-edge issues faced in the world trade agenda.
Q: What is your understanding as to whether Japan has become a more open country, regardless of American perceptions?
A: In the late 1980s and early 1990s, Japanese statistics, compared to those of other major industrial countries, did not look good on openness. Today, the stock of foreign investment in Japan is still relatively low and because of this, consumption of foreign goods and imports in Japan is low because foreign investment drives trade patterns to such a large extent. Also, slow growth in Japan has meant that import growth markets are constrained by the general slowness in macroeconomic growth. So the Japanese economy requires further internationalization for it to be more typical of an industrial country. That said, I am struck by how much the discussion by American and European companies about the Japanese markets has changed. Somewhere in the 1990s, there came to be a belief that it was possible to have a winning strategy in Japan. In 1994 it was not clear to me that people really believed that. So this is a big transformation.
Q: How do you evaluate the possibility of a Japan-U.S. FTA as well as U.S.-China FTA?
A: I think a U.S.-Japan FTA, unless Japan was willing to really make major concessions, is probably not a political ground you want to explore. If Japan really had a set of strategic economic objectives for additional access to the U.S. that they did not feel they had, and Japan was really willing to pay for, it might be politically viable. But it would take a very aggressive Japanese offer and I am not sure that is viable for you. I do not see a U.S.-China FTA in the near term. China is such a large political factor that I could imagine a U.S. trade representative sitting down with his or her team of advisors and spending a lot of time talking about it. Probably, at the end, they would say it is just too risky to attempt the agreement. But I could imagine a trade representative at least having that conversation and giving it serious thought.
Q: Whoever is chosen in the election will face pressure for currency realignment. Could you elaborate on that?
A: Economists have said that the U.S. current account deficit may not be easily sustainable. You can try to get additional markets for U.S. exporters or other adjustments in trade policy, but they are not significant. The major policy tool that you have is the currency exchange rate mechanism. The considerations you need to have are the degree of downward movement of the dollar and the implications of the adjustment on the willingness of others to hold U.S. bonds and securities, which affects macroeconomic policy. So the difference between Kerry and Bush in handling exchange rates is tied to macroeconomic policy and different approaches to handling government budget deficits. Whether Kerry will be able to reduce the U.S. government deficit will depend on whether he has control of Congress.
Q: You mentioned reducing the import of oil to reduce the size of the U.S. current account deficit, but when I think about the lifestyle in the United States, I wonder how Americans can conserve energy? In spite of the fact that the oil price is going up, they cannot reduce gasoline consumption because they have to use automobiles to get to work or in their daily lives. So a very important point for energy policy is whether the U.S. government will change its position regarding the import of nuclear power stations. That is the most important point to reduce the import of oil in the long term. How do you feel the policy will develop if Kerry becomes president?
A: It will not be easy for Kerry to greatly expand the nuclear power option. It is very politically sensitive. But there are options for the Kerry administration in regards to energy efficiency and conservation. Japan did this after the oil shock of 1973 by driving up the price of energy and through aggressive pro-conservation regulations and policies. Politically it is attractive for Democrats to move in that direction because they are seen as "greener" than Republicans and there are a number of technologies now available to do that. For a Democrat that is an attractive package and so I think that structural change in the U.S. economy is possible if it is politically attractive. It would not solve the total problem but as a matter of economic, military and political prudence, in order to do something about the trade deficit that is not protectionist, this would be a good move.
Q: I hope there are no attempts to make very radical, artificial adjustments in the dollar as a method of redressing the imbalance in the current account.
A: I would add two points. One, the Bush administration is working on the exchange rate with China through the G-7 mechanism - seeking an orderly change, which is an indication of how any administration would try to do it. Second, former Secretary of the Treasury Rubin, a powerful advisor to Kerry, believes in a strong U.S. dollar as it is an important signal to the financial markets and a type of self-discipline for the U.S. more than anything else. So the influence of someone like Rubin may be a factor.
Q: You talked about trading partners. Would you elaborate how you link the view on Japan and the view on Japan behind China?
A: Last year I testified to the U.S.-China Commission of the U.S. Congress that I did not believe China was a fundamental threat economically to the United States and that a good deal of the U.S. deficit with China was due to the restructuring of production chains in the Pacific, where China becomes the intermediate stage of production before goods are shipped to the U.S. So although the trade deficit with China is huge, we in the U.S. need to appreciate that China has assumed a different role in the global production system and that the system benefits the U.S. overall and is essential to American companies' economic strategies. The U.S. reasserted its competitiveness in the 1990s in part by the successful creation of very effective global production chains. But it is a politically more explosive relationship than the relationship with Japan. The Bush administration in the last two years has been careful in its trade policy with China because of the larger security questions. Given the realities of trade politics over time, however, both Bush and Kerry are going to have to be a little firmer and have a few more disputes with China in order to show they are upholding U.S. interests.
Q: That means if you come down on China too harshly, there will be some repercussions on security issues?
A: The U.S. and Japan went through all their economic disputes, but kept their security ties. In the long-term China faces a choice: If they are to truly be one of the world's great trading and economic powers, the other major economic and security powers have to be convinced that China is a reasonable place for their companies to do business fairly. That is ultimately why an economic relationship like trade is a political relationship. China's great challenge is to increase the world's confidence in it as a fair trading partner. That is something the U.S. will have to be aggressive in dealing with, but within the bounds of having a successful security relationship.
Q: Could you comment on the actual benefit of bilateral trade agreements to consumers and on labor and environmental standards and their potential to undermine successful Doha Round negotiations?
A: I take your point about the limited economic significance of many of the bilaterals. It is one of the reasons why they are politically safe, but I want to stress that they are important for other reasons, including the fact that they give the president a reason to be committed and active on preaching free trade policy - and that is vital.
Labor and environment could be a problem for Kerry. But there are no signs that the Kerry team has any policy that will be a show-stopper for the Doha Round. What they want to come out of the Doha Round with is room to maneuver on a bilateral basis and to be no worse off on a multilateral basis. The question is whether a Kerry administration could convince labor and others in the Democratic coalition that they are really making progress on facilitating worker protection and environmental protection in other countries. A Kerry administration would spend more money on foreign aid to enhance environmental and labor policy programs in other countries that are trading partners. That is the most viable approach.
*This summary was compiled by RIETI Editorial staff.
|
2022-11-26 22:34:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2356506735086441, "perplexity": 1762.8076822933826}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00647.warc.gz"}
|
https://www.transtutors.com/questions/1-the-results-of-theorem-10-17-1-suggest-the-following-algorithm-for-determining-the-5849053.htm
|
# 1. The results of Theorem 10.17.1 suggest the following algorithm for determining the optimal...
1. The results of Theorem 10.17.1 suggest the following algorithm for determining the optimal sustainable yield.
(i) For each value of i = 1, 2, . . . , n, set hi = h and hk = 0 for k _= i and calculate the respective yields. These n calculations give the one-age-class results. Of course, any calculation leading to a value of h not between 0 and 1 is rejected.
(ii) For each value of i = 1, 2, . . . , n − 1 and j = i + 1, i +
2, . . . , n, set hi = h, hj = 1, and hk = 0 for k _= i, j and calculate the respective yields. These 1 2 n(n − 1) calculations give the two-age-class results. Of course, any calculation leading to a value of h not between 0 and 1 is again rejected.
(iii) Of the yields calculated in parts (i) and (ii), the largest is the optimal sustainable yield. Note that there will be at most n + 1 2 n(n − 1) = 1 2 n(n + 1) calculations in all. Once again, some of these may lead to a value of h not between 0 and 1 andmust therefore be rejected. If we use this algorithm for the sheep example in the text, there will be at most 1 2 (12)(12 + 1) = 78 calculations to consider. Use a computer to do the two-age-class calculations for h1 = h, hj = 1, and hk = 0 for k _= 1 or j for j = 2, 3, . . . ,
12. Construct a summary table consisting of the values of h1 and the percentage yields using j = 2, 3, . . . , 12, which will show that the largest of these yields occurs when j = 9.
|
2021-03-04 13:11:39
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8234450221061707, "perplexity": 328.5989794824712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369054.89/warc/CC-MAIN-20210304113205-20210304143205-00144.warc.gz"}
|
https://www.hackmath.net/en/math-problem/3839
|
# Difference of two number
The difference of two numbers is 20. They are positive integers greater than zero. The first number raised to one-half equals the second number. Determine the two numbers.
Correct result:
a = 25
b = 5
#### Solution:
$a-b=20 \ \\ b^2=a=25=0 \ \\ b^2-b-20=0 \ \\ \ \\ b^2-b-20=0 \ \\ b^2 -b -20=0 \ \\ \ \\ p=1; q=-1; r=-20 \ \\ D=q^2 - 4pr=1^2 - 4\cdot 1 \cdot (-20)=81 \ \\ D>0 \ \\ \ \\ b_{1,2}=\dfrac{ -q \pm \sqrt{ D } }{ 2p }=\dfrac{ 1 \pm \sqrt{ 81 } }{ 2 } \ \\ b_{1,2}=\dfrac{ 1 \pm 9 }{ 2 } \ \\ b_{1,2}=0.5 \pm 4.5 \ \\ b_{1}=5 \ \\ b_{2}=-4 \ \\ \ \\ \text{ Factored form of the equation: } \ \\ (b -5) (b +4)=0 \ \\ b=b_{1}=5 \ \\ a=b^2=5^2=25$
Checkout calculation with our calculator of quadratic equations.
$b={b}_{1}=5$
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you!
Please write to us with your comment on the math problem or ask something. Thank you for helping each other - students, teachers, parents, and problem authors.
Tips to related online calculators
Looking for help with calculating roots of a quadratic equation?
Do you have a linear equation or system of equations and looking for its solution? Or do you have quadratic equation?
## Next similar math problems:
• Ball game
Richard, Denis and Denise together scored 932 goals. Denis scored 4 goals over Denise but Denis scored 24 goals less than Richard. Determine the number of goals for each player.
• Stones 3
Simiyu and Nasike each collected a number of stones in an arithmetic lesson. If Simiyu gave Nasike 5 stones, Nasike would have twice as many stones as Simiyu. If initially, Simiyu had five stones less than Nasike how many stones did each have?
• The product
The product of a number plus that number and its inverse is two and one-half. What is the inverse of this number
• Three workshops
There are 2743 people working in three workshops. In the second workshop works 140 people more than in the first and in third works 4.2 times more than the second one. How many people work in each workshop?
• Children
The group has 42 children. There are 4 more boys than girls. How many boys and girls are in the group?
• Legs
Cancer has 5 pairs of legs. The insect has 6 legs. 60 animals have a total of 500 legs. How much more are cancers than insects?
• Variable
Find variable P: PP plus P x P plus P = 160
• Three unknowns
Solve the system of linear equations with three unknowns: A + B + C = 14 B - A - C = 4 2A - B + C = 0
• Elimination method
Solve system of linear equations by elimination method: 5/2x + 3/5y= 4/15 1/2x + 2/5y= 2/15
• Discriminant
Determine the discriminant of the equation: ?
• Roots
Determine the quadratic equation absolute coefficient q, that the equation has a real double root and the root x calculate: ?
• Linsys2
Solve two equations with two unknowns: 400x+120y=147.2 350x+200y=144
• Solve 3
Solve quadratic equation: (6n+1) (4n-1) = 3n2
• Men, women and children
On the trip went men, women and children in the ratio 2:3:5 by bus. Children pay 60 crowns and adults 150. How many women were on the bus when a bus was paid 4,200 crowns?
|
2020-08-12 01:25:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4163535237312317, "perplexity": 2285.8032675552695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738858.45/warc/CC-MAIN-20200811235207-20200812025207-00154.warc.gz"}
|
http://mathhelpforum.com/calculus/181663-taking-average-two-average-rates-change-print.html
|
# Taking the average of two average rates of change?
• May 25th 2011, 07:18 PM
CalculusABStudent
Taking the average of two average rates of change?
I am working on a math Jeopardy project for my Calc AB class and I wanted to write a question similar to the following one, but I need to know how to solve it first. Could anyone show me the work to get the answer of 13% per year?
Here is the Question out of my Calc book: The table shows the estimated percentage P of the population of Europe that use cell phones. (Midyear estimates are given).
Year: 1998 1999 2000 2001 2002 2003
P: 28 39 55 68 77 83
Estimate the instantaneous rate of growth in 2000 by taking the average of two average rates of change. What are its units?
So my attempt at this problem was taking the rate of change of 1999 and then 2001 and then taking the rate of change between those two to get the average rate of change of those average rates of change at 2000? ah help please. I was totally off.
• May 25th 2011, 08:32 PM
integral
Just say $\frac{F(b)-F(a)}{b-a}$
Which gives you average slope.
f(x)= percent
f'(x)= rate of percent change
f''(x)=rate of the rate of percent change.
You were looking for an approx of f''(x) I think.
You need f'(x)
b=2001
a=2000
• May 25th 2011, 08:44 PM
CalculusABStudent
I'm still not getting the right answer. I've done 55-39 to get 16. Then I did 68-55 to get 13. Then added 16 and 13 and divided by two and got 14.5. However the answer is 13. I tried a few other methods and got the wrong answer also.
EDIT: nevermind I got it. thank you
• May 25th 2011, 08:56 PM
integral
(68-55)/(2001-2000)
Sorry, I gave you the wrong b and a. That is the answer though. No more work needs to be done.
• May 26th 2011, 03:05 AM
Ackbeet
Is this project for a grade?
• May 26th 2011, 06:11 AM
mr fantastic
Quote:
Originally Posted by CalculusABStudent
I am working on a math Jeopardy project for my Calc AB class and I wanted to write a question similar to the following one, but I need to know how to solve it first. Could anyone show me the work to get the answer of 13% per year?
Here is the Question out of my Calc book: The table shows the estimated percentage P of the population of Europe that use cell phones. (Midyear estimates are given).
Year: 1998 1999 2000 2001 2002 2003
P: 28 39 55 68 77 83
Estimate the instantaneous rate of growth in 2000 by taking the average of two average rates of change. What are its units?
So my attempt at this problem was taking the rate of change of 1999 and then 2001 and then taking the rate of change between those two to get the average rate of change of those average rates of change at 2000? ah help please. I was totally off.
|
2016-12-05 17:50:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8440321683883667, "perplexity": 432.99344267103663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541773.2/warc/CC-MAIN-20161202170901-00153-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://0-network.bepress.com.library.simmons.edu/physical-sciences-and-mathematics/mathematics/page9
|
# Mathematics Commons™
## All Articles in Mathematics
17,867 full-text articles. Page 9 of 528.
On Revelation Transforms That Characterize Probability Distributions, 2019 Victoria University of Wellington
#### On Revelation Transforms That Characterize Probability Distributions, Stefanka Chukova, Boyan N. Dimitrov, Jean Pierre Dion
##### Boyan Dimitrov
A characterization of exponential, geometric and of distributions with almost-lack-of-memory property, based on the revelation transform of probability distributions and relevation of random variables is discussed. Known characterizations of the exponential distribution on the base of relevation transforms given by Grosswald et al. [4], and Lau and Rao [7] are obtained under weakened conditions and the proofs are simplified. A characterization the class of almost-lack-of-memory distributions through the relevation is specified.
2019 Kettering University
#### Measuring Dependence In Uncertainty Should Start In The Introduction To Probability And Statistics, Boyan N. Dimitrov
##### Boyan Dimitrov
No abstract provided.
Local Dependence Structure Of The Bivariate Normal Distribution, 2019 Kettering University
#### Local Dependence Structure Of The Bivariate Normal Distribution, Boyan N. Dimitrov, Kreg Deachin
##### Boyan Dimitrov
In several previous publications we developed an idea how probability tools can be used to measure strength of dependence between random events. In this talk we use it for measuring magnitude of local dependence inside the normally distributed random variables, using the regression coefficients in case of random events. Short illustrations (graphics and tables) are showing the use of these measures in already known popular Bivariate Normal distribution with different correlation values.
In Memoriam : Elart Von Collan, 2019 Kettering University
#### In Memoriam : Elart Von Collan, Boyan N. Dimitrov
##### Boyan Dimitrov
No abstract provided.
2019 University of New Mexico - Main Campus
#### L^{\Infty}-Estimates Of The Solution Of The Navier-Stokes Equations For Periodic Initial Data, Santosh Pathak
##### Mathematics & Statistics ETDs
In this doctoral dissertation, we consider the Cauchy problem for the 3D incompressible Navier-Stokes equations. Here, we are interested in a smooth periodic solution of the problem which happens to be a special case of a paper by Otto Kreiss and Jens Lorenz. More precisely, we will look into a special case of their paper by two approaches. In the first approach, we will try to follow the similar techniques as in the original paper for smooth periodic solution. Because of the involvement of the Fourier expansion in the process, we encounter with some intriguing factors in the periodic case ...
Non-Sparse Companion Matrices, 2019 Redeemer University College
#### Non-Sparse Companion Matrices, Louis Deaett, Jonathan Fischer, Colin Garnett, Kevin N. Vander Meulen
##### Electronic Journal of Linear Algebra
Given a polynomial $p(z)$, a companion matrix can be thought of as a simple template for placing the coefficients of $p(z)$ in a matrix such that the characteristic polynomial is $p(z)$. The Frobenius companion and the more recently-discovered Fiedler companion matrices are examples. Both the Frobenius and Fiedler companion matrices have the maximum possible number of zero entries, and in that sense are sparse. In this paper, companion matrices are explored that are not sparse. Some constructions of non-sparse companion matrices are provided, and properties that all companion matrices must exhibit are given. For example, it is ...
2019 The University of Southern Mississippi
#### Krylov Subspace Spectral Methods With Non-Homogenous Boundary Conditions, Abbie Hendley
##### Master's Theses
For this thesis, Krylov Subspace Spectral (KSS) methods, developed by Dr. James Lambers, will be used to solve a one-dimensional, heat equation with non-homogenous boundary conditions. While current methods such as Finite Difference are able to carry out these computations efficiently, their accuracy and scalability can be improved. We will solve the heat equation in one-dimension with two cases to observe the behaviors of the errors using KSS methods. The first case will implement KSS methods with trigonometric initial conditions, then another case where the initial conditions are polynomial functions. We will also look at both the time-independent and time-dependent ...
Long-Dose Intensive Therapy Is Necessary For Strong, Clinically Significant, Upper Limb Functional Gains And Retained Gains In Severe/Moderate Chronic Stroke, 2019 Malcom Randall Gainesville DVA Medical Center
#### Long-Dose Intensive Therapy Is Necessary For Strong, Clinically Significant, Upper Limb Functional Gains And Retained Gains In Severe/Moderate Chronic Stroke, Janis J. Daly, Jessica P. Mccabe, John P. Holcomb, Michelle Monkiewicz, Jennifer Gansen, Svetlana Pundik
##### Mathematics Faculty Publications
Background. Effective treatment methods are needed for moderate/severely impairment chronic stroke. Objective. The questions were the following: (1) Is there need for long-dose therapy or is there a mid-treatment plateau? (2) Are the observed gains from the prior-studied protocol retained after treatment? Methods. Single-blind, stratified/randomized design, with 3 applied technology treatment groups, combined with motor learning, for long-duration treatment (300 hours of treatment). Measures were Arm Motor Ability Test time and coordination-function (AMAT-T, AMAT-F, respectively), acquired pre-/posttreatment and 3-month follow-up (3moF/U); Fugl-Meyer (FM), acquired similarly with addition of mid-treatment. Findings. There was no group difference in ...
Euler’S Calculation Of The Sum Of The Reciprocals Of Squares, 2019 Ursinus College
#### Euler’S Calculation Of The Sum Of The Reciprocals Of Squares, Kenneth M. Monks
##### Calculus
No abstract provided.
Active Prelude To Calculus, 2019 Grand Valley State University
#### Active Prelude To Calculus, Matthew Boelkins
##### Open Textbooks
Active Prelude to Calculus is designed for college students who aspire to take calculus and who either need to take a course to prepare them for calculus or want to do some additional self-study. Many of the core topics of the course will be familiar to students who have completed high school. At the same time, we take a perspective on every topic that emphasizes how it is important in calculus. This text is written in the spirit of Active Calculus and is especially ideal for students who will eventually study calculus from that text. The reader will find that ...
2019 John Carroll University
#### Algebraic Topics In The Classroom – Gauss And Beyond, Lisa Krance
##### Masters Essays
No abstract provided.
2019 John Carroll University
#### Introduction Of Infinite Series In High School Level Calculus, Ericka Bella
##### Masters Essays
No abstract provided.
2019 Western Kentucky University
#### Development Of A Karst Tourism Management Index To Assess Tourism-Driven Degradation Of Protected Karst Sites, Keith R. Semler
##### Masters Theses & Specialist Projects
The intent of this research was to create and evaluate a karst tourism management index (KTMI). This index is intended to be a new management tool designed to quantify environmental disturbances caused specifically by tourism activities in karst regions, particularly show caves and springs. In an effort to assess the effectiveness of the index as a management tool in karst terrains, after development, the index was applied to six case study sites. A review of the management policies at each study site was conducted with the use of standard policy critique methods and semistructured interviews with managers at the study ...
Properties Of Functionally Alexandroff Topologies And Their Lattice, 2019 Western Kentucky University
#### Properties Of Functionally Alexandroff Topologies And Their Lattice, Jacob Scott Menix
##### Masters Theses & Specialist Projects
This thesis explores functionally Alexandroff topologies and the order theory asso- ciated when considering the collection of such topologies on some set X. We present several theorems about the properties of these topologies as well as their partially ordered set.
The first chapter introduces functionally Alexandroff topologies and motivates why this work is of interest to topologists. This chapter explains the historical context of this relatively new type of topology and how this work relates to previous work in topology. Chapter 2 presents several theorems describing properties of functionally Alexandroff topologies ad presents a characterization for the functionally Alexandroff topologies ...
Copula-Based Zero-Inflated Count Time Series Models, 2019 Old Dominion University
#### Copula-Based Zero-Inflated Count Time Series Models, Mohammed Sulaiman Alqawba
##### Mathematics & Statistics Theses & Dissertations
Count time series data are observed in several applied disciplines such as in environmental science, biostatistics, economics, public health, and finance. In some cases, a specific count, say zero, may occur more often than usual. Additionally, serial dependence might be found among these counts if they are recorded over time. Overlooking the frequent occurrence of zeros and the serial dependence could lead to false inference. In this dissertation, we propose two classes of copula-based time series models for zero-inflated counts with the presence of covariates. Zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), and zero-inflated Conway-Maxwell-Poisson (ZICMP) distributed marginals of the ...
List-Distinguishing Cartesian Products Of Cliques, 2019 University of Colorado, Denver
#### List-Distinguishing Cartesian Products Of Cliques, Michael Ferrara, Zoltan Füredi, Sogol Jahanbekam, Paul Wenger
##### Sogol Jahanbekam
The distinguishing number of a graph G, denoted D(G), is the minimum number of colors needed to produce a coloring of the vertices of G so that every nontrivial isomorphism interchanges vertices of different colors. A list assignment L on a graph G is a function that assigns each vertex of G a set of colors. An L-coloring of G is a coloring in which each vertex is colored with a color from L(v). The list distinguishing number of G, denoted Dℓ(G) is the minimum k such that every list assignment L that assigns a list ...
Localization Theory In An Infinity Topos, 2019 The University of Western Ontario
#### Localization Theory In An Infinity Topos, Marco Vergura
##### Electronic Thesis and Dissertation Repository
We develop the theory of reflective subfibrations on an ∞-topos E. A reflective subfibration L on E is a pullback-compatible assignment of a reflective subcategory D_X ⊆ E/X with associated localization functor L_X, for every X in E. Reflective subfibrations abound in homotopy theory, albeit often disguised, e.g., as stable factorization systems. The added properties of a reflective subfibration L on E compared to a mere reflective subcategory of E are crucial for most of our results. For example, we can prove that L-local maps (i.e., those maps p in D_X for some X in E) admit a ...
Two Games Displayed By Butler’S 2017 Celebration Of Mind, 2019 Butler University
#### Two Games Displayed By Butler’S 2017 Celebration Of Mind, Jeremiah Farrell
##### Jeremiah Farrell
Jeremiah's two games displayed by Butler's 2017 Celebration of Mind.
Flying Saucer, 2019 Butler University
#### Flying Saucer, Jeremiah Farrell, Karen Farrell
##### Jeremiah Farrell
Jeremiah's puzzle "Flying Saucer", which was exchanged at the 2013 International Puzzle Party in Washington, DC. 100 puzzle designers create 100 copies of their puzzle and pass it out at the party and exchange them. This puzzle is also manufactured by Walter Hoppe as "Flying Saucer".
Continuation Of Polyanalytic Functions, 2019 Samarkand State University
#### Continuation Of Polyanalytic Functions, T. Ishankulov, G. Norqulova
##### Scientific Journal of Samarkand University
We consider the problem of continuation the � − analytic function in to a domain by values of its sequential derivatives up to the (� − 1) -th order on a part of the boundary. The problem of inversion of a Cauchy type integral to a Cauchy integral for such functions is also considered.
|
2019-10-18 21:30:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4667035639286041, "perplexity": 3115.4342304658976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684854.67/warc/CC-MAIN-20191018204336-20191018231836-00085.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-concepts-through-functions-a-unit-circle-approach-to-trigonometry-3rd-edition/chapter-12-counting-and-probability-section-12-1-counting-12-1-assess-your-understanding-page-867/26
|
## Precalculus: Concepts Through Functions, A Unit Circle Approach to Trigonometry (3rd Edition)
$80,000$ different 5-digit numbers numbers
There are 8 selections for first choice ( because 0 and 1 are excluded), and 10 selections each for the second, third, fourth and fifth choices. Using multiplication principle of counting, the number of 5-digit numbers that can be formed with the given restrictions is: $= 8\times10\times10\times10\times10 \\= 80,000$.
|
2018-12-16 05:27:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7993189096450806, "perplexity": 589.0326747366794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827281.64/warc/CC-MAIN-20181216051636-20181216073636-00634.warc.gz"}
|
https://dsp.stackexchange.com/tags/fourier-transform/new
|
# Tag Info
1
Indeed there are two things you have to know. First, it can be shown that the continuous-time Fourier transform can be obtained from the continuous-time Fourier series by letting the period $T$ go to infinity. Second, formally speaking the Fourier transform integral for periodic signals do not converge, hence do not exist. The solution is a generalisation ...
0
This is typically done using a segmented overlap add method or sometimes also refered to as a block convolver. Let's assume your block size if 512 (makes the numbers a little easier). Chop up your impulse response into 32 blocks of 512 samples each. Zero pad each block to 1024 samples and FFT. You know have 32 filters $H_0(z) ... H_{31}(z)$ On each new ...
0
Because they made a mistake. The very first equation on the right-hand side is wrong. It should be $$\textrm{DTFT}\big\{x[2n+1]\big\}=\sum_{n=-\infty}^{\infty}x[2n+1]e^{-jn\omega}=\sum_{n\textrm{ odd}}x[n]e^{-j(n-1)\omega/2}\tag{1}$$ Using the trick they suggested, $(1)$ can be written as $$\textrm{DTFT}\big\{x[2n+1]\big\}=\frac{e^{j\omega /2}}{2}\left[\... 1 The IDTFT of X(e^{j\omega})=1 is indeed$$x[n]=\frac{\sin(n\pi)}{n\pi}\tag{1}$$Now, what happens for indices n\neq 0? As it turns out, you can safely rewrite (1) as$$x[n]=\delta[n]\tag{2}$$where \delta[n] is the discrete-time unit impulse. (HINT: think about where the zeros of \sin(x) are). 0 According to the documentation, the coherence between two signals x(t) and y(t) is defined as C_{xy}(w) = \frac{|P_{xy}(w)|^2}{P_x(w) P_y(w)}, where P_x(w) and P_y(w) are power spectral density estimates of x and y and P_{xy}(w) is an estimate of the cross-spectral density. In your first experiment, you put in two identical signals x(t) = ... 0 For the sake of simplicity, I'll explain the 1-D case; the 2-D case is completely analogous. Let x[n] be a finite length sequence with n\in[0,N-1]. Its discrete Fourier transform (DFT) is$$X[k]=\sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi}{N} kn}\tag{1}$$The sequence x[n] can be obtained from X[k] via the inverse DFT (IDFT):$$x[n]=\frac{1}{N}\sum_{k=0}^{...
1
Peaks are determined by energy in a given frequency range which can be more or less visible depending on the dimension of frequency bins.
1
First of all, sampling frequency and sampling rate are synonymous. You mean sampling frequency of 44.1 kHz with a data length of 20 ms. Second of all, 20 ms of data at 44.1 kHz will give you 882 points. Not enough for a 1024-point FFT. You will either need to upsample to 51.2 kHz, that way 20 ms will give you 1024 points. Or you could append 142 zeroes to ...
1
Note that in general the Fourier transform of a function is a complex-valued function, so in general it is not only positive or negative. Roughly speaking, the magnitude of the Fourier transform says something about the presence of certain frequencies components in a signal, regardless of the phase (or sign, in the real-valued case). The phase determines ...
2
Since an FFT is a linear operator, adding up the complex results of a sequence of FFTs of short windows is the same as doing a single short FFT on the the vector addition of all those short windows. Note that for signals that are exactly integer periodic in the FFT width (sequential 0% overlapped windows), the vector addition will constructively interfere. ...
1
You can average complex cross spectra or cross correlation like Welch. For something like coherence, you do both, complex averages and power averages. It can tell you a lot about a system as opposed to a single signal.
0
is there any merit to using something like this? I don't think so. Unless your signal is somehow phase locked with your analysis window, the individual complex Fourier coefficients will just cancel each other simply because of phase variations.
6
There can't be. One man's signal is another man's noise. In fact, a communication system making the absolute most of a bandwidth would be spectrally white, just like white noise, and hence be indistinguishable from noise to anyone but the receiver for that specific system.
0
Thinking in terms of convolution with shifted impulses help. Multiplication in the time domain corresponds to convolution in the frequency domain. Your example is a classic showcase of frequency/band shifting using a "carrier" such as a cosine (or sine) or a complex exponential. Now, note, that the Fourier transform of these carriers are actually shifted ...
0
The amplitude you “see” in a spectrum is the result of a narrow band filter, a bin of a DFT, or a pixel at some finite DPI. All of those cover some non-zero bandwidth. So what you see does not have infinitesimal bandwidth (or less).
1
You can see the non-zero magnitude of the spectrum, but the interpretation that the signal contains a sinusoidal component at a specific frequency where the magnitude is non-zero is wrong. If there is a sinusoidal component present, then there is a Dirac impulse at the respective frequency. A non-zero Fourier transform at a certain frequency is not ...
1
Let me summarize my understanding of what you're trying to do. You have a real-valued sequence $x[n]$, obtained by sampling a real-valued continuous function, and you computed its DFT $X[k]$. The sequence can be expressed in terms of its DFT coefficients: $$x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j2\pi nk/N},\qquad n\in[0,N-1]\tag{1}$$ where $N$ is the ...
2
Incidentally, DFT is the only bijective linear transformation that exchanges convolution and termwise multiplication (up to permutation of the coefficients, obviously). This is not difficult to prove, but I have found no reference on this result before I spelled it out in Music Through Fourier Space, Thm. 1.11 (Springer 2016). It is messier in the continuous ...
0
I have looked at your image (and didn't read the post) and its explanation is as follows. Let the data in the blue curve be $x[n]$, and the data in the red curve be $y[n]$; then it can be seen and shown that: $$y[n] = \tfrac12 ( x[n] + (-1)^n x[n] )$$ In the DTFT domain this relationship becomes : $$Y(e^{j\omega}) = \tfrac12 \big( X(e^{j\omega}) + X(... 0 I'm going to attempt to answer my own question based on some of the comments and my own further investigation Is this a valid procedure? It's not invalid, but it does not necessarily achieve the outcome I am looking for. Welch is used to improve the SNR in a DFT by averaging shorter length DFT's. The more shorter DFT's you use the better the outcome, but ... 1 when I read the original autotune patent few steps made me understand that everything was done in time domain (in the past, I don't know today), they didn't mention anything about overlap and add, it made me wonder if the pitch detector is so good that they didn't have to overlap and add, could they always skip or add periods in the exact position? (just out ... 2 First of all, in order to have a Fourier transform, the original signal has to be multiplied by the unit step function:$$x(t)=e^{-t}u(t)$$Giving indeed the transform:$$X(\omega)=\frac{1}{j\omega+1}$$The way to plot a complex function on the frequency domain is by finding both its amplitude and its phase, and drawing one graph for each. For the ... 0 Although perhaps not state-of-the-art, FFT based phase vocoder algorithms have been used for time-pitch modification (with formant shift artifacts). And time-pitch modification can be used to auto-tune. 1 This information was provided by the user "Birdwes", but he didn't have enough reputation to post it himself so I will post it here for him because it does seem relevant and useful. "I do not have enough points in this forum to add a comment, so I'm doing it here: take a look at the source code for Accord.Math Hilbert Transform and you will see why this can ... 1 You can use the 2D DFT formula$$H(\omega_1,\omega_2) = \sum_{n_1} \sum_{n_2} h[n_1,n_2] e^{-j(\omega_1 n_1 + \omega_2 n_2)} $$and simply the trigonometric algebra to get a closed form analytic expression for the 2D-DTFT. However, as @LaurentDuval has already mentioned, your 3x3 kernel is separable and one set of 1D filters is this$$f[n_1] = [\frac{1}{...
6
To answer the second question, in digital communications there is a technique in use in cellphones right now that makes good use of applying the IFFT to a time-domain signal. OFDM applies an IFFT to a time-domain sequence of data at the transmitter, then reverses that with an FFT at the receiver. While the literature likes to use IFFT->FFT, it really makes ...
1
I think that you are splitting your signal $x[n]$ into $N$ STFT intervals which you are overlapping by 50%, then on each interval finding the Power Spectral Density (PSD) with Welch's Method. If this is correct I think this method is valid, but I think you might be confused by the 'overlapping' because there are two at play here. When you use Welch's ...
16
Whilst taking the Fourier transform directly twice in a row just gives you a trivial time-inversion that would be much cheaper to implement without FT, there is useful stuff that can be done by taking a Fourier transform, applying some other operation, and then again Fourier transforming the result of that. The best-known example is the autocorrelation, ...
8
"Is there any practical application?" Definitely yes, at least to check code, and bound errors. "In theory, theory and practice match. In practice, they don't." So, mathematically, no, as answered by Matt. Because (as already answered), $\mathcal{F}\left(\mathcal{F}\left(x(t)\right)\right)=x(-t)$ (up to a potential scaling factor). However, it can be ...
17
No, taking the Fourier transform twice is equivalent to time inversion (or inversion of whatever dimension you're in). You just get $x(-t)$ times a constant which depends on the type of scaling you use for the Fourier transform. The inverse Fourier transform applied to a time domain signal just gives the spectrum with frequency inversion. Have a look at ...
11
2D Fourier transform (2D DFT) is used in image processing since an image can be seen as a 2D signal. E.g. for a grayscale image $I$, $I(x,y)=z$, that means that at the coordinates $x$ and $y$ the image has intensity value z. Look at this for example: https://ch.mathworks.com/help/matlab/ref/fft2.html Try this: x=imread('cameraman.tif'); X=fft2(fft2(x)); ...
Top 50 recent answers are included
|
2019-09-19 20:12:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8361690044403076, "perplexity": 500.15270732354065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573570.6/warc/CC-MAIN-20190919183843-20190919205843-00408.warc.gz"}
|
http://anynowhere.com/bb/posts.php?t=3899
|
. /../Compiling noctis???/ 1
written by Dorino on Mar 04, 2008 22:38
I really want to mess with Noctis's source code.However, I can't get it to compile. Can someone with experience give me the step by step?
doing pushups
written by Megagun on Mar 06, 2008 14:20
Hmmm.. Compiling Noctis is indeed a bit of a chore, yes..Well, you'll need Borland C compiler, version 3.1 for DOS, and then.. err.... Well, honestly, I can't even remember; it's been such an awful long time ago.. :/If you want to compile Noctis IV CE, though (and accept the buggyness), you can either:1- Try to compile it for DOSget borland C compiler, v3.1 for dos and put it in c:\bc.3, then put NICE in c:\noctis and its source in c:\noctis\source, and then run the compile .bat files that are in the source folder.2 - Try to compile it for WINDOWSget Digital Mars C/C++ compiler (free!) from http://www.digitalmars.com/download/dmcpp.htmlput it in c:\dm (so that you get c:\dm\bin\...), then put NICE in c:\noctis and its source in c:\noctis\source, and then run the compile .bat file intended for compiling the Windows version of NICE (shoudn't be hard to find out which)...I'm sorry I can't be of any further assistance with compiling, though, but feel free to drop on IRC so that we can figure out what's wrong and how we can get the thing to compile on your PC...
|
2021-02-24 17:06:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804191529750824, "perplexity": 4457.234445584772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347293.1/warc/CC-MAIN-20210224165708-20210224195708-00398.warc.gz"}
|
http://www.zora.uzh.ch/id/eprint/146458/
|
# Measurement of the CP violation parameter $A_{\Gamma}$ in $D^0 \to K^+K^-$ and $D^0 \to \pi^+\pi^-$ decays
LHCb Collaboration; Bernet, R; Müller, K; Serra, N; Steinkamp, O; Straumann, U; Vollhardt, A; et al (2017). Measurement of the CP violation parameter $A_{\Gamma}$ in $D^0 \to K^+K^-$ and $D^0 \to \pi^+\pi^-$ decays. Physical Review Letters, 118(26):261803.
## Abstract
Asymmetries in the time-dependent rates of $D^0 \to K^+K^−$ and $D^0 \to \pi^+\pi^-$ decays are measured in a pp collision data sample collected with the LHCb detector during LHC Run 1, corresponding to an integrated luminosity of 3 $fb^{−1}$. The asymmetries in effective decay widths between $D^0$ and $\overline{D}^0$ decays, sensitive to indirect CP violation, are measured to be $A_{\Gamma}(K^+K^−) = (−0.30 \pm 0.32 \pm 0.10) \times 10^{−3}$ and $A_{\Gamma}(\pi^+\pi−) = (0.46 \pm 0.58 \pm 0.12) \times 10^{−3}$, where the first uncertainty is statistical and the second systematic. These measurements show no evidence for CP violation and improve on the precision of the previous best measurements by nearly a factor of two.
## Abstract
Asymmetries in the time-dependent rates of $D^0 \to K^+K^−$ and $D^0 \to \pi^+\pi^-$ decays are measured in a pp collision data sample collected with the LHCb detector during LHC Run 1, corresponding to an integrated luminosity of 3 $fb^{−1}$. The asymmetries in effective decay widths between $D^0$ and $\overline{D}^0$ decays, sensitive to indirect CP violation, are measured to be $A_{\Gamma}(K^+K^−) = (−0.30 \pm 0.32 \pm 0.10) \times 10^{−3}$ and $A_{\Gamma}(\pi^+\pi−) = (0.46 \pm 0.58 \pm 0.12) \times 10^{−3}$, where the first uncertainty is statistical and the second systematic. These measurements show no evidence for CP violation and improve on the precision of the previous best measurements by nearly a factor of two.
## Statistics
### Citations
Dimensions.ai Metrics
2 citations in Web of Science®
1 citation in Scopus®
### Altmetrics
Detailed statistics
|
2018-04-19 17:28:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8565076589584351, "perplexity": 1608.037060752322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937015.7/warc/CC-MAIN-20180419165443-20180419185443-00027.warc.gz"}
|
https://quantumcomputing.stackexchange.com/questions/14400/how-to-represent-the-state-vector-form-of-a-qubit-in-density-matrix-representati?noredirect=1
|
How to represent the state vector form of a qubit in density matrix representation? [duplicate]
While I'm studying state vector and density matrix. I wonder how to write qubit state as density matrix. qubit state can be represented with state vector form. But how about density matrix?
A qubit state $$|\psi\rangle \in \mathbb{C}^{2^n} \ \ \textrm{for} \ \ n\in\mathbb{N}$$ can be represented as a density operator/matrix as $$\rho = | \psi \rangle \langle \psi |$$.
For example: If $$|\psi \rangle = |1\rangle$$ then $$\rho = |\psi \rangle \langle \psi | = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0\\ 0 & 1 \end{pmatrix}$$ is the density matrix representation of $$|\psi \rangle$$.
Thus, density matrix representation offers a general way of expressing a quantum state. As you can see, a pure state $$|\psi \rangle$$ can always be converted into a density matrix representation where the matrix is of rank 1. That is density matrix generalize the idea of state vector. In fact, the pure states (state vectors) are just the extreme points of the state space. In term of 1 qubit, you can think that the 1 qubit state vector $$|\psi \rangle = \alpha | 0 \rangle + \beta |1 \rangle$$ are just the states on the surface of the Blochsphere. The states inside the Blochsphere are known as mixed states.
To go even deeper, states are positive linear functional of unit norm.
If we have a pure state:
$$|\psi\rangle = \alpha |0 \rangle + \beta |1\rangle$$
The corresponding density matrix:
$$\rho = |\psi \rangle \langle \psi| = (\alpha |0 \rangle + \beta |1\rangle)(\alpha^* \langle 0 | + \beta^* \langle 1|) = \\= |\alpha|^2|0 \rangle \langle 0| + \alpha \beta^*|0 \rangle \langle 1| + \alpha^* \beta|1 \rangle \langle 0|+ |\beta|^2 |1 \rangle \langle 1| = \begin{pmatrix} |\alpha|^2 & \alpha \beta^* \\ \alpha^* \beta & |\beta|^2 \end{pmatrix}$$
If the state is not pure and we have $$p_1$$ probability of haveing $$\psi_1$$ state and with $$p_2$$ probability of having $$\psi_2$$ state (and this can be extended to $$n$$ cases: $$p_n$$ probability of $$\psi_n$$ state):
$$\rho = p_1 |\psi_1 \rangle \langle \psi_1| + p_2 |\psi_2 \rangle \langle \psi_2| = p_1 \begin{pmatrix} |\alpha_1|^2 & \alpha_1 \beta_1^* \\ \alpha_1^* \beta_1 & |\beta_1|^2 \end{pmatrix} + p_2 \begin{pmatrix} |\alpha_2|^2 & \alpha_2 \beta_2^* \\ \alpha_2^* \beta_2 & |\beta_2|^2 \end{pmatrix} = \\ =\begin{pmatrix} p_1|\alpha_1|^2 + p_2|\alpha_2|^2 & p_1 \alpha_1 \beta_1^* + p_2 \alpha_2 \beta_2^* \\ p_1 \alpha_1^* \beta_1 + p_2 \alpha_2^* \beta_2 & p_1|\beta_1|^2 + p_2 |\beta_2|^2 \end{pmatrix}$$
|
2021-10-17 04:07:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9611096382141113, "perplexity": 274.9360132985033}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00184.warc.gz"}
|
https://www.physicsforums.com/threads/i-woke-up-wondering.531289/
|
I woke up wondering
1. Sep 18, 2011
UnBoxedCat
When is now?
The further away an object is, the further back in time it appears to be. So does this mean the closer an object is, the more closer it is to now?
Which brought up another question, as i lapsed between dreamland and the noise of the dogs barking and running around with squeeky toys! If there are 2 objects of equal size travelling towards each other at equal velocity (i imagined 2 spheres), and at the same time decreasing in size, at the same rate... would they ever meet?
As you probably guess by the simplicity and probably ignorant questions... I'm new to physics (as in, I've never studied or read anything about the subject). But I just had to ask these questions before they fried my brain. This seems to be the best place to ask.
Scott
2. Sep 18, 2011
high noon
about those spheres, just assume some radius and velocity and calculate urself if they will reach each-other....
3. Sep 18, 2011
klimatos
For your first question, both time and distance are considered to be dimensions, but distance has two directions whereas time only has one (i. e., "time's arrow").
For your second question, this appears to be a variation on Zeno's First Paradox. Google on it for some interesting discussions.
By the way, questions like these will not fry your brain. They will stretch it!
4. Sep 18, 2011
nonequilibrium
A bit vague. Are you referring to the fact that light that reaches you from far away is "old light" in the sense that is was sent off a time of $\frac{\textrm{distance}}{\textrm{speed of light}}$ seconds ago?
5. Sep 18, 2011
Disinterred
For the spheres problem, they will meet. Just replace either of sphere with a point particle. Two point particles traveling head on towards each other will meet if their velocities are equal, opposite in direction and constant.
The reason I am considering point particles here is that if the spheres keeps decreasing, they will eventually approach the size of a point particle.
Last edited: Sep 18, 2011
6. Sep 18, 2011
Staff: Mentor
Consider these two points;
-I've I post a letter to you and you receive it three days later am I three days in the past?
-Two balloons are blown up then released without having a knot tied in them. They are propelled towards each other expanding air as they go. What will happen?
7. Sep 18, 2011
sophiecentaur
All news is old news. It takes time for all information to travel from A to B so, whatever you find out about what's happening somewhere, it's really information about how things were some while ago. Even for a spacing of 1m, there is a delay of about 3ns.
Take that Supernova that 'we' observed last week. IS it happening as we watch? One could say so - the same as when we watch a feature film. The action we see on the film is 'happening' now, for us.
8. Sep 22, 2011
UnBoxedCat
Some interesting and mind boggling stuff to consider. Thank you. I'm still not sure if I'm further from, or nearer to the answers.
When we measure the light distance from the furthest galaxies... is this the measure of how old the universe is?
If the universe were to shrink, at what point would everything become nearest the point of now? At the point of were galaxies are the size of particles?
|
2017-09-26 00:29:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5870992541313171, "perplexity": 955.180088647856}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693866.86/warc/CC-MAIN-20170925235144-20170926015144-00269.warc.gz"}
|
https://plainmath.net/advanced-physics/103260-which-photon-is-more-energeti
|
pennywise2374ryv
2023-03-07
Which photon is more energetic red or violet and why?
### Answer & Explanation
enkestaxn2
The photon is the fundamental unit of the visible range of light. In certain dimensions, visible light behaves as a wave phenomenon, while in many others, it behaves more like a stream of slightly raised, submicroscopic particulates. A photon's energy is frequently measured in electron volts. ($1\mathrm{eV}=1.602×{10}^{-12}\mathrm{erg}$). From 400 nm at the violet end of the spectrum to 700 nm at the red end, light has a range of wavelengths. Compared to red light, violet light has a shorter wavelength. Because of this, violet has a higher frequency than red. The violet photon is therefore more powerful than the red photon.
|
2023-03-28 01:35:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5670127272605896, "perplexity": 695.8982926312218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00607.warc.gz"}
|
https://biol-300-wiki-docs.readthedocs.io/en/latest/build-from-scratch/copy-contents.html
|
# 11. Copy Old Wiki Contents¶
ssh -p 8015 hjc@dynamicshjc.case.edu
2. Check for and install system updates on the virtual machine:
sudo apt-get update
sudo apt-get autoremove
3. Start the old (2016) virtual machine if it is not already running.
4. Export the policy and unit pages from last year’s wiki. You may have decided after last spring’s semester ended to make some changes to these pages on either the live wiki (biol300.case.edu) or the development wiki (biol300dev.case.edu). To determine where the latest changes can be found, visit these pages to view the latest edits on each wiki:
If you made changes in parallel to both wikis that you want to keep, you may need to manually propogate some of those edits.
After determining where you should export pages from, visit Special:Export on that wiki (live wiki | dev wiki). Select these options,
• Check “Include only the current revision, not the full history”
• Do NOT check “Include templates”
• Check “Save as file”
and paste the names of the following pages into the text field:
BIOL 300 Wiki pages to export
Course policies
Class attendance
Team evaluations
Concepts and attitudes surveys
Problem benchmarks
Comments on other students' term paper benchmarks
Term Paper Template
Model Plan
Exemplary Model Plan
Selecting a published model to reproduce
Term paper proposal
Exemplary Term Paper Proposal 1
Exemplary Term Paper Proposal 2
Benchmark I: Introduction
Exemplary Introduction Draft 1
Exemplary Introduction Draft 2
Exemplary Introduction Draft 3
Exemplary Introduction Draft 4
Exemplary Introduction Draft 5
Benchmark II: Model Description
Exemplary Model Description Draft 1
Exemplary Model Description Draft 2
Exemplary Model Description Draft 3
Exemplary Model Description Draft 4
Exemplary Model Description Draft 5
Benchmark III: Results
Exemplary Results Draft 1
Exemplary Results Draft 2
Exemplary Results Draft 3
Exemplary Results Draft 4
Exemplary Results Draft 5
Benchmark IV: Discussion
Exemplary Discussion Draft 1
Exemplary Discussion Draft 2
Exemplary Discussion Draft 3
Exemplary Discussion Draft 4
Exemplary Discussion Draft 5
Term paper
Exemplary Final Term Paper 1
Exemplary Final Term Paper 2
Exemplary Final Term Paper 3
Exemplary Final Term Paper 4
Exemplary Final Term Paper 5
Exemplary Final Term Paper 6
Student Presentations
Presentation Guidelines
Rationale for modeling and modeling tools
Plagiarism
Don't Cheat
The rules apply to everyone, including you
Course syllabus
List of Discussion benchmarks
List of final term papers
Student list
Help:Editing
User:Hjc
User:Jpg18
Sandbox
Wiki Todo list
Hjc Todo list
Export the pages and save the XML file when given the option.
5. On the 2017 virtual machine, visit Special:Import and upload the XML file obtained from the 2016 wiki (choose “Import to default locations”).
6. Since it is possible that the list above is incomplete, visit Special:WantedPages to determine which pages are still missing.
There will be several missing pages related to the class that should be ignored. These are the pages begining with the slash (“/”) character, such as /Model Plan. These appear in the list because the Template:Termpaper page uses relative links for the term paper benchmanks.
If necessary, repeat steps 4-5 until no relevant pages are missing.
7. The following pages need to be updated with new dates, personnel, office hours times, etc., or out-dated contents need to be cleared:
8. If you’d like to add or remove term paper benchmark exemplars, now is a good time to do so. If you remove any, be sure to also delete associated files and images from the “Files to Import” directory.
Todo
The “Files to Import” directory is now hosted online. Add instructions for modifying it.
9. On the virtual machine, download and then import into the wiki a collection of images and files. This includes the wiki logo, favicon, and figures from benchmark exemplars:
wget -P ~ https://biol-300-wiki-docs.readthedocs.io/en/latest/_downloads/BIOL-300-Files-to-Import.tar.bz2
tar -xjf ~/BIOL-300-Files-to-Import.tar.bz2 -C ~
php /var/www/mediawiki/maintenance/importImages.php --user=Hjc ~/BIOL-300-Files-to-Import
sudo apache2ctl restart
rm -rf ~/BIOL-300-Files-to-Import*
If you’d like to view the collection of files, you can download it to your personal machine here: BIOL-300-Files-to-Import.tar.bz2
10. Todo
Update this step with instructions for adding files to the online “BIOL-300-Files-to-Import.tar.bz2” archive, and move the fetch_wiki_files.sh script to an external file in the docs source.
Visit Special:WantedFiles to determine which files are still missing. Files on this list that are struckthrough are provided through Wikimedia Commons and can be ignored.
If there are only a few files missing, download them individually from the old wiki, add them to the “Files to Import” directory, and upload them manually.
If there are many files missing (which is likely to happen if you added a new exemplar), you can use the following script to download them from the old wiki in a batch.
On your personal machine, create the file
vim fetch_wiki_files.sh
and fill it with the following:
fetch_wiki_files.sh
#!/bin/bash
# This script should be run with a single argument: the path to a file
# containing the names of the files to be downloaded from the wiki,
# each on its own line and written in the form "File:NAME.EXTENSION".
INPUT="$1" if [ ! -e "$INPUT" ]; then
echo "File \"$INPUT\" not found!" exit 1 fi # MediaWiki provides an API for querying the server. We will use it # to determine the URLs for directly downloading each file. WIKIAPI=https://biol300.case.edu/w/api.php # The result of our MediaWiki API query will be provided in JSON and # will contain some unnecessary meta data. We will use this Python # script to parse the query result. It specifically extracts only the # URLs for directly downloading each file. SCRIPT=" import sys, json data = json.loads(sys.stdin.read())['query']['pages'] for page in data.values(): if 'invalid' not in page and 'missing' not in page: print page['imageinfo'][0]['url'] " # Create the directory where downloaded files will be saved DIR=downloaded_wiki_files mkdir -p$DIR
# While iterating through the input line-by-line...
if [ "$FILENAME" ]; then echo -n "Downloading \"$FILENAME\" ... "
# ... query the server for a direct URL to the file ...
JSON=curl -s -d "action=query&format=json&prop=imageinfo&iiprop=url&titles=$FILENAME"$WIKIAPI
# ... parse the query result to obtain the naked URL ...
URL=echo $JSON | python -c "$SCRIPT"
if [ "$URL" ]; then # ... download the file cd$DIR
curl -s -O $URL cd .. echo "success!" else echo "not found!" fi fi done < "$INPUT"
Make the script executable:
chmod u+x fetch_wiki_files.sh
Copy the bulleted list of missing files found at Special:WantedFiles and paste them into this file:
vim wanted_files_list.txt
You can use this Vim command to clean up the list:
:%s/^\s*File:$$.*$$\%u200f\%u200e (\d* link[s]*)\$/File:\1/g
Finally, execute the script to download all the files in the list:
./fetch_wiki_files.sh wanted_files_list.txt
The downloaded files will be saved in the downloaded_wiki_files directory. Copy these to the “Files to Import” directory and upload them to the new wiki manually or using the importImages.php script used in step 9.
11. Protect every image and media file currently on the wiki from vandalism. Access the database:
mysql -u root -p wikidb
Enter the <MySQL password> when prompted. Execute these SQL commands (the magic number 6 refers to the File namespace):
INSERT IGNORE INTO page_restrictions (pr_page,pr_type,pr_level,pr_cascade,pr_expiry)
SELECT p.page_id,'edit','sysop',0,'infinity' FROM page AS p WHERE p.page_namespace=6;
SELECT p.page_id,'move','sysop',0,'infinity' FROM page AS p WHERE p.page_namespace=6;
Type exit to quit.
sudo shutdown -h now
|
2022-06-27 09:04:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1851770281791687, "perplexity": 11014.590008388914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103329963.19/warc/CC-MAIN-20220627073417-20220627103417-00374.warc.gz"}
|
https://ask.sagemath.org/question/58411/save-slower-than-raw-computation/
|
# Save slower than raw computation
I have a computation which takes several minutes to run. Using cell magic %%time I get the following timing:
CPU times: user 6min 51s, sys: 29.8 s, total: 7min 21s
Wall time: 7min 28s
I thought it would be nice to not have to recompute the output of the computation (an algebra I) every time I wanted to use it so I saved using:
save(I,'os7_6x1_c',compress=True)
Then when I go to load it, it takes longer to load than to compute:
%%time
CPU times: user 8min 40s, sys: 35 s, total: 9min 15s
|
2022-08-13 02:27:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3880639672279358, "perplexity": 4658.667540859419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00131.warc.gz"}
|
http://hal.in2p3.fr/in2p3-01012002
|
# Measurement of the $t\bar{t}$ production cross-section using $e\mu$ events with $b$-tagged jets in $pp$ collisions at $\sqrt{s}=7$ and 8 TeV with the ATLAS detector
Abstract : The inclusive top quark pair production cross-section has been measured in proton-proton collisions at sqrt(s)=7 TeV and sqrt(s)=8 TeV with the ATLAS experiment at the LHC, using ttbar events with an opposite-charge e-mu pair in the final state. The measurement was performed with the 2011 7 TeV dataset corresponding to an integrated luminosity of 4.6 fb-1 and the 2012 8 TeV dataset of 20.3 fb-1. The numbers of events with exactly one and exactly two b-tagged jets were counted and used to simultaneously determine sigma(ttbar) and the efficiency to reconstruct and b-tag a b-jet from a top quark decay, thereby minimising the associated systematic uncertainties. The cross-section was measured to be: sigma(ttbar)=$182.9\pm3.1\pm4.2\pm3.6\pm3.3$ pb (7 TeV) and sigma(ttbar)=$242.4\pm1.7\pm5.5\pm7.5\pm4.2$ pb (8 TeV), where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, the knowledge of the integrated luminosity and of the LHC beam energy. The results are consistent with recent theoretical QCD calculations at NNLO. Fiducial measurements corresponding to the experimental acceptance of the leptons are also reported, together with the ratio of cross-sections measured at the two centre-of-mass energies. The inclusive cross-section results were used to determine the top quark pole mass via the dependence of the theoretically-predicted cross-section on $m_t^{pole}$, giving a result of $m_t^{pole}=172.9^{+2.5}_{-2.6}$ GeV. By looking for an excess of ttbar production with respect to the QCD prediction, the results were also used to place limits on the pair-production of supersymmetric top squarks with masses close to the top quark mass decaying to predominantly right-handed top quarks and a light neutralino, the lightest supersymmetric particle. Top squarks with masses between the top quark mass and 177 GeV are excluded at the 95% confidence level.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-01012002
Contributor : Sabine Starita Connect in order to contact the contributor
Submitted on : Wednesday, June 25, 2014 - 11:11:44 AM
Last modification on : Thursday, December 16, 2021 - 2:11:14 PM
### Citation
G. Aad, G. Rahal, S. Abdel Khalek, A. Bassalat, C. Becot, et al.. Measurement of the $t\bar{t}$ production cross-section using $e\mu$ events with $b$-tagged jets in $pp$ collisions at $\sqrt{s}=7$ and 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, Springer Verlag (Germany), 2014, 74, pp.3109. ⟨10.1140/epjc/s10052-014-3109-7⟩. ⟨in2p3-01012002⟩
### Metrics
Les métriques sont temporairement indisponibles
|
2022-01-26 13:55:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8876714110374451, "perplexity": 2710.7029111980446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304954.18/warc/CC-MAIN-20220126131707-20220126161707-00696.warc.gz"}
|
http://gmatclub.com/forum/in-a-certain-neighborhood-there-are-half-as-many-beige-house-146970.html?kudos=1
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 07 Jul 2015, 19:13
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# In a certain neighborhood there are half as many beige house
Author Message
TAGS:
Manager
Joined: 28 May 2009
Posts: 155
Location: United States
Concentration: Strategy, General Management
GMAT Date: 03-22-2013
GPA: 3.57
WE: Information Technology (Consulting)
Followers: 4
Kudos [?]: 126 [0], given: 91
In a certain neighborhood there are half as many beige house [#permalink] 09 Feb 2013, 13:03
00:00
Difficulty:
35% (medium)
Question Stats:
64% (01:47) correct 36% (00:55) wrong based on 84 sessions
In a certain neighborhood there are half as many beige houses as white houses and five times as many white houses as brown houses. What is the ratio of the number of brown houses to the number of beige houses?
(A) 1:10
(B) 1:9
(C) 2:5
(D) 5:2
(E) 10:1
Source: Gmat Hacks 1800
[Reveal] Spoiler: OA
_________________
Kaplan Promo Code Knewton GMAT Discount Codes Veritas Prep GMAT Discount Codes
GMAT Tutor
Joined: 24 Jun 2008
Posts: 1174
Followers: 303
Kudos [?]: 967 [1] , given: 4
Re: In a certain neighborhood there are half as many beige house [#permalink] 10 Feb 2013, 06:37
1
KUDOS
Expert's post
This is a pure ratio question, so there's no harm in picking a number for something if that makes things easier for you. If we have 10 white houses, we have 5 beige houses and 2 brown houses. So the answer is 2:5.
_________________
GMAT Tutor in Toronto
If you are looking for online GMAT math tutoring, or if you are interested in buying my advanced Quant books and problem sets, please contact me at ianstewartgmat at gmail.com
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 5682
Location: Pune, India
Followers: 1416
Kudos [?]: 7362 [1] , given: 186
Re: In a certain neighborhood there are half as many beige house [#permalink] 27 Feb 2013, 19:52
1
KUDOS
Expert's post
hikaps14 wrote:
megafan wrote:
In a certain neighborhood there are half as many beige houses as white houses and five times as many white houses as brown houses. What is the ratio of the number of brown houses to the number of beige houses?
(A) 1:10
(B) 1:9
(C) 2:5
(D) 5:2
(E) 10:1
Source: Gmat Hacks 1800
I usually do not have any problem with ratios.. but there are few which trouble me a lot. The above one use word 'as many'.
I read the problem and got the answer 5:2. I interpreted "half no. of beige = total no. of white & 5 times the no. of white = total no. of brown'.
The ans suggest otherwise. Could some one explain, how to interpret them correctly.
Thanks,
If I tell you, "I have five times as many problems as you do," what does it mean? Do I have more problems or do you?
I hope you will agree that I have more problems.
Similarly, I have half as many books as you means that I have fewer books.
Now look at the question:
"there are half as many beige houses as white houses" - implies Beige:White = 1:2 (There are fewer Beige houses)
"there are five times as many white houses as brown houses" - implies White:Brown = 5:1 (There are more White houses)
Hence, Beige:White:Brown = 5:10:2
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199 Veritas Prep Reviews Director Status: Joined: 24 Jul 2011 Posts: 642 GMAT 1: 780 Q51 V48 GRE 1: 1540 Q800 V740 Followers: 85 Kudos [?]: 350 [0], given: 14 Re: In a certain neighborhood there are half as many beige house [#permalink] 09 Feb 2013, 21:27 Let x be the number of beige houses Then number of white houses = 2x Number of brown houses = 2x/5 Ratio of number of brown houses to number of beige houses = (2x/5)/(x) = 2:5 Option C _________________ GyanOne | http://www.GyanOne.com | +91 9899831738 Manager Joined: 08 Dec 2012 Posts: 67 Location: United Kingdom GMAT 1: 710 Q0 V0 WE: Engineering (Consulting) Followers: 1 Kudos [?]: 69 [0], given: 31 Re: In a certain neighborhood there are half as many beige house [#permalink] 10 Feb 2013, 09:58 $$\frac{1}{2}*Be$$ = $$1*W$$ $$5 * W$$ = $$1*Br$$ => $$\frac{1}{5}*Br$$ So Ration of Br to Be = $$\frac{1}{5}/\frac{1}{2}$$ = $$\frac{2}{5}$$ Answer is C Senior Manager Joined: 10 Apr 2012 Posts: 281 Location: United States Concentration: Technology, Other GPA: 2.44 WE: Project Management (Telecommunications) Followers: 3 Kudos [?]: 266 [0], given: 325 Re: In a certain neighborhood there are half as many beige house [#permalink] 27 Feb 2013, 11:53 nave81 wrote: $$\frac{1}{2}*Be$$ = $$1*W$$ $$5 * W$$ = $$1*Br$$ => $$\frac{1}{5}*Br$$ So Ration of Br to Be = $$\frac{1}{5}/\frac{1}{2}$$ = $$\frac{2}{5}$$ Answer is C Nave 81 - little correction 1/2 White =Be.House & white house =5 brown house . translation : if there are 2 brown houses , there will be 10 white house : hence W.H=5 Brown House therefore : 5/2 brown house =be.House => brown house/be.House =2/5 Intern Joined: 28 Jan 2013 Posts: 8 Location: United States Concentration: Strategy, Technology GMAT Date: 04-20-2013 GPA: 3.2 WE: Analyst (Computer Software) Followers: 0 Kudos [?]: 11 [0], given: 4 Re: In a certain neighborhood there are half as many beige house [#permalink] 27 Feb 2013, 14:38 megafan wrote: In a certain neighborhood there are half as many beige houses as white houses and five times as many white houses as brown houses. What is the ratio of the number of brown houses to the number of beige houses? (A) 1:10 (B) 1:9 (C) 2:5 (D) 5:2 (E) 10:1 Source: Gmat Hacks 1800 I usually do not have any problem with ratios.. but there are few which trouble me a lot. The above one use word 'as many'. I read the problem and got the answer 5:2. I interpreted "half no. of beige = total no. of white & 5 times the no. of white = total no. of brown'. The ans suggest otherwise. Could some one explain, how to interpret them correctly. Thanks, Intern Joined: 28 Jan 2013 Posts: 8 Location: United States Concentration: Strategy, Technology GMAT Date: 04-20-2013 GPA: 3.2 WE: Analyst (Computer Software) Followers: 0 Kudos [?]: 11 [0], given: 4 Re: In a certain neighborhood there are half as many beige house [#permalink] 28 Feb 2013, 11:42 VeritasPrepKarishma wrote: If I tell you, "I have five times as many problems as you do," what does it mean? Do I have more problems or do you? I hope you will agree that I have more problems. Similarly, I have half as many books as you means that I have fewer books. Now look at the question: "there are half as many beige houses as white houses" - implies Beige:White = 1:2 (There are fewer Beige houses) "there are five times as many white houses as brown houses" - implies White:Brown = 5:1 (There are more White houses) Hence, Beige:White:Brown = 5:10:2 Thanks for your reply. The moment you used the missing word 'times'. The problem was crystal clear. I will always assume the times word if not mentioned. ex : 'half times' Thanks karishma Intern Joined: 04 Oct 2012 Posts: 4 Followers: 0 Kudos [?]: 0 [0], given: 0 Re: In a certain neighborhood there are half as many beige house [#permalink] 28 Feb 2013, 16:50 VeritasPrepKarishma wrote: hikaps14 wrote: megafan wrote: In a certain neighborhood there are half as many beige houses as white houses and five times as many white houses as brown houses. What is the ratio of the number of brown houses to the number of beige houses? (A) 1:10 (B) 1:9 (C) 2:5 (D) 5:2 (E) 10:1 Source: Gmat Hacks 1800 I usually do not have any problem with ratios.. but there are few which trouble me a lot. The above one use word 'as many'. I read the problem and got the answer 5:2. I interpreted "half no. of beige = total no. of white & 5 times the no. of white = total no. of brown'. The ans suggest otherwise. Could some one explain, how to interpret them correctly. Thanks, If I tell you, "I have five times as many problems as you do," what does it mean? Do I have more problems or do you? I hope you will agree that I have more problems. Similarly, I have half as many books as you means that I have fewer books. Now look at the question: "there are half as many beige houses as white houses" - implies Beige:White = 1:2 (There are fewer Beige houses) "there are five times as many white houses as brown houses" - implies White:Brown = 5:1 (There are more White houses) Hence, Beige:White:Brown = 5:10:2 I still don't get it . Are we multiplying both of the numbers in White to get the 10? Intern Joined: 28 Jan 2013 Posts: 8 Location: United States Concentration: Strategy, Technology GMAT Date: 04-20-2013 GPA: 3.2 WE: Analyst (Computer Software) Followers: 0 Kudos [?]: 11 [0], given: 4 Re: In a certain neighborhood there are half as many beige house [#permalink] 28 Feb 2013, 17:54 dananyc wrote: I still don't get it . Are we multiplying both of the numbers in White to get the 10? Karishma , just added a kudos for your reply. As karishma stated : "there are half as many beige houses as white houses" - implies Beige:White = 1:2 (There are fewer Beige houses) "there are five times as many white houses as brown houses" - implies White:Brown = 5:1 (There are more White houses) Hence, Beige:White:Brown = 5:10:2 She made the ratios common above Be : Wi = 1:2 (same as 5:10) Wi : br = 5:1 (same as 10:2) Once the white ratio is made common in above 2 equations then we can merge the 2 as Be: wi : Br = 5:10:2 Another way to look at Be/Wi = 1/2 ( Wi = 2 Be) Wi/Br = 5/1 ( Wi = 5 Br) so we 2 Be = 5 Br , Br/ Be = 2/5. I hope I made some sense. Moderator Joined: 01 Sep 2010 Posts: 2634 Followers: 470 Kudos [?]: 3646 [0], given: 726 Re: In a certain neighborhood there are half as many beige house [#permalink] 28 Feb 2013, 19:01 Expert's post It is not difficult to concentrate on the stem and think: white is $$20$$, brown is $$\frac{1}{5}$$ of white so $$4$$ and beige is $$half$$ of white so $$10$$ $$Brown / beige$$ $$=$$$$\frac{4}{10}$$ $$=$$ $$\frac{2}{5}$$ C 30 seconds. that's it. no formula, no bother _________________ Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 5682 Location: Pune, India Followers: 1416 Kudos [?]: 7362 [0], given: 186 Re: In a certain neighborhood there are half as many beige house [#permalink] 28 Feb 2013, 19:26 Expert's post hikaps14 wrote: dananyc wrote: I still don't get it . Are we multiplying both of the numbers in White to get the 10? Karishma , just added a kudos for your reply. As karishma stated : "there are half as many beige houses as white houses" - implies Beige:White = 1:2 (There are fewer Beige houses) "there are five times as many white houses as brown houses" - implies White:Brown = 5:1 (There are more White houses) Hence, Beige:White:Brown = 5:10:2 She made the ratios common above Be : Wi = 1:2 (same as 5:10) Wi : br = 5:1 (same as 10:2) Once the white ratio is made common in above 2 equations then we can merge the 2 as Be: wi : Br = 5:10:2 Another way to look at Be/Wi = 1/2 ( Wi = 2 Be) Wi/Br = 5/1 ( Wi = 5 Br) so we 2 Be = 5 Br , Br/ Be = 2/5. I hope I made some sense. To add to what hikaps14 said, ratio between two numbers is nothing but the relation between them. Beige:White = 1:2 means for every one Beige, there are two Whites White: Brown = 5:1 means for every 5 Whites, there is 1 Brown. So what is the relation between Beige and Brown? We don't know because the numbers of whites are not comparable. So what do we do? We make the Whites comparable i.e. we make them same. Beige:White = 1:2 = 5:10 (ratio remains the same if you multiply each term by the same number) means for every 5 Beige, there are 10 Whites. White: Brown = 5:1 = 10:2 means for every 10 WHites, there are 2 Browns. Now we can say that for every 5 Beige, there are 10 Whites and for every 10 Whites, there are 2 Browns. SO for every 5 Beige, there are 2 Browns. Beige:Brown = 5:2 This is how you manipulate ratios. It is useful to know. This question is best solved taking numbers though (as done by Ian above) _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199
Veritas Prep Reviews
Intern
Joined: 04 Oct 2012
Posts: 4
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: In a certain neighborhood there are half as many beige house [#permalink] 04 Mar 2013, 18:08
Thanks very much!
Intern
Joined: 17 May 2013
Posts: 49
GMAT Date: 10-23-2013
Followers: 0
Kudos [?]: 6 [0], given: 8
Re: In a certain neighborhood there are half as many beige house [#permalink] 25 Aug 2013, 03:04
Assume number of White houses as 10, Beige is half as many and White is 5 times Brown
Beige : White : Brown
5 : 10 : 2
Ratio of Brown to beige would be 2 : 5
Ans : C
Re: In a certain neighborhood there are half as many beige house [#permalink] 25 Aug 2013, 03:04
Similar topics Replies Last post
Similar
Topics:
4 One water pump can fill half of a certain empty tank in 3 hours. Anoth 6 04 Jan 2012, 09:17
5 At a certain college there are twice as many english majors 6 22 Oct 2009, 14:11
On a certain tour, half of the men are married, and the 9 18 Jun 2008, 11:52
A certain kennel will house 24 dogs for 7 days. Each dog 3 26 Feb 2007, 11:43
8 On Monday, a certain animal shelter housed 55 cats and dogs. 7 16 Feb 2007, 04:03
Display posts from previous: Sort by
|
2015-07-08 03:13:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.444945752620697, "perplexity": 7430.847729041205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375635604.22/warc/CC-MAIN-20150627032715-00022-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://blog.assafrinot.com/?tag=03e02
|
# Tag Archives: 03E02
## Rectangular square-bracket operation for successor of regular cardinals
Joint work with Stevo Todorcevic. Extended Abstract: Consider the coloring statement $\lambda^+\nrightarrow[\lambda^+;\lambda^+]^2_{\lambda^+}$ for a given regular cardinal $\lambda$: In 1990, Shelah proved the above for $\lambda>2^{\aleph_0}$; In 1991, Shelah proved the above for $\lambda>\aleph_1$; In 1997, Shelah proved the above … Continue reading
## Transforming rectangles into squares, with applications to strong colorings
Abstract: It is proved that every singular cardinal $\lambda$ admits a function $\textbf{rts}:[\lambda^+]^2\rightarrow[\lambda^+]^2$ that transforms rectangles into squares. That is, whenever $A,B$ are cofinal subsets of $\lambda^+$, we have $\textbf{rts}[A\circledast B]\supseteq C\circledast C$, for some cofinal subset $C\subseteq\lambda^+$. As a … Continue reading
|
2015-03-03 04:38:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495663046836853, "perplexity": 1188.4513006049099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463108.89/warc/CC-MAIN-20150226074103-00213-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2334230/difference-in-logic-notations-for-maths-and-computer-science
|
# Difference in logic notations for maths and computer science
I am reading: How to prove it, A structured approach by Daniel J.Velleman and despite it being a book based on both maths and computer science all logic statements seems to be in the "maths logic" notation like so: $$\lor, \land, \lnot$$ However in all my computer science classes we used "computing logic" notation like so: $$+,\cdot,\overline{A}$$ There is no explanation of when, and which, of these notations are to be used in different scenarios in the book. I personally prefer the "maths logic notation" and my best guess is that the reason it seems to be used more is that it is less ambiguous as it uses its own symbols. If this is the case is there any real reason for the "computing logic notation" to be used, or is it just used due to tradition?
Also as a side note my book stated that in maths $or$ is considered to be iclusive is there therefore a separate sign for exclusive $or$ much like $\oplus$ in computing notation?
• $\veebar$ is used for xor sometimes, IIRC. – Chappers Jun 23 '17 at 23:42
• Sonnym Yes you are correct. in math, (and in logic), "or" is taken to be the inclusive or, just like + is in computing logic. In logic, instead of seeing $\oplus$ (used extensively in computer/boolean logic), we usually simply define the exclusive "or" explicitly: $p\oplus q \equiv$ $$(p \lor q) \land \lnot (p \land q)$$ – Namaste Jun 23 '17 at 23:46
• In general: $a+b$ corresponds to $a\lor b$, $a \cdot b = ab$ corresponds to $a \land b$, $\bar{A} = A' = \lnot A$. – Namaste Jun 23 '17 at 23:51
• It really splits off in terms of strict boolean logic, and the logic used in math and in logic. The equivalences remain, but used for different purposes. I know best from the math/logic perspective. Similarly, in boolean logic, true is represented by $1$ and false by $0$, whereas in logic and math, we are more likely to use true, T, for true, and F or false, for false. – Namaste Jun 23 '17 at 23:56
• Computer science is a big field. Most CS papers I read don't use your "computing logic" notation. Admittedly, I'm primarily interested in programming language theory which is closely related to type theory and mathematical logic. Even then there's a smörgåsbord of notations. Similarly mathematical logic has a smörgåsbord on notations including ones similar to your "computing logic" notation. It's simply not the case that logic notation is standardized even within a subfield. – Derek Elkins Jun 24 '17 at 0:05
If you observe, the computer science notation is similar to regular algebra notation: addition signs and multiplication signs. It becomes convenient to use when 'or' doesn't really represent a logical 'or' but a bitwise 'or', sometimes just called 'addition' (without carries):
10010101
+ 00100101
= 10110101
'Multiplication' a.k.a. 'and' works similarly. The main difference is that the computer science notation is used more often in the context of performing this special type of arithmetic, or dealing with Boolean algebra expressions like $AB + \bar{A}C$, because it is easier to think of them that way. (E.g. There are digital circuits called 'adders'.)
The mathematical notation of $\land$, $\lor$, $\lnot$ is used exclusively for logical statements like $P(x) \land \lnot(Q(x) \lor R(x))$. I have almost never seen something like $10010101 \lor 00100101$ and it would be unintuitive to write it that way.
This duality is quite interesting when you notice that the two notations are actually talking about the same thing!
• I've pretty much universally seen $\wedge, \vee, \lnot$ used as the operations in a Boolean algebra, which would include examples such as bit strings of a certain length. – Daniel Schepler Jun 24 '17 at 0:10
• @DanielSchepler Perhaps that is because I have only dealt with Boolean algebra in a digital electronics context, where it is almost always written like my example. – shardulc Jun 24 '17 at 0:15
• Again, dear @DanielSchepler this is a question about logic notation, not of boolean algebra. – Namaste Jun 24 '17 at 0:43
• It really comes down to one thing : Where are the $\lor, \land, \lnot$ keys on your computer keyboard? (v,^ don't cut it.) Information Technology text authors just prefer easily typeable notations over ASC][ or markup text. – Graham Kemp Jun 24 '17 at 7:28
• It also helps to know how computer languages deal with it. For example, the C family uses &, |, ^, ! for .and., .or., .xor., .not. (with variations for bitwise vs. boolean). – amI Jun 13 '18 at 23:09
|
2019-08-22 20:21:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.779171884059906, "perplexity": 759.1804353901443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317359.75/warc/CC-MAIN-20190822194105-20190822220105-00508.warc.gz"}
|
http://polyhedra.mathmos.net/entry/truncatedcube.html
|
# The Truncated Cube
A semi-regular polyhedron with two octagons and a triangle meeting at each vertex.
The truncated cube is one of the thirteen archimedian solids. It can be created by slicing suitable sections off the vertices of either a cube or an octahedron and thus may be inscribed in either solid.
VertexSymbol WythoffSymbol No. ofVertices No. ofEdges {3}Faces {8}Faces SymmetryGroup Dual Polyhedron 3.8.8 2 3 | 4 24 36 8 6 S4×C2 Triakis Octahedron
Edge ratios:
• e/rho = 2 - sqrt(2)
• e/R = (2 sqrt(7-4sqrt(2)))/sqrt(17)
• e/(e_6) = sqrt(2) - 1
• e/(e_8) = 3sqrt(2) - 4
where e is the edge length, rho is the inter-radius, R is the circum-radius, e_6 is the edge of the circumscribing cube, and e_8 is the edge of the circumscribing octahedron.
|
2017-07-24 04:30:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5794022083282471, "perplexity": 4091.0298609139686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424721.74/warc/CC-MAIN-20170724042246-20170724062246-00591.warc.gz"}
|
https://research.wu.ac.at/en/publications/convex-projection-and-convex-vector-optimization-3
|
# Convex Projection and Convex Vector Optimization
Publication: Scientific journalJournal articlepeer-review
## Abstract
In this paper we consider a problem, called convex projection, of projecting a convex set
onto a subspace. We will show that to a convex projection one can assign a particular multiobjective
convex optimization problem, such that the solution to that problem also solves the
convex projection (and vice versa), which is analogous to the result in the polyhedral convex
case considered in Löhne and Weißing (Math Methods Oper Res 84(2):411–426, 2016). In
practice, however, one can only compute approximate solutions in the (bounded or selfbounded)
convex case, which solve the problem up to a given error tolerance. We will show
that for approximate solutions a similar connection can be proven, but the tolerance level needs
to be adjusted. That is, an approximate solution of the convex projection solves the multiobjective
problem only with an increased error. Similarly, an approximate solution of the
multi-objective problem solves the convex projection with an increased error. In both cases the
tolerance is increased proportionally to amultiplier. Thesemultipliers are deduced and shown
to be sharp. These results allow to compute approximate solutions to a convex projection
problem by computing approximate solutions to the corresponding multi-objective convex
optimization problem, for which algorithms exist in the bounded case. For completeness, we
will also investigate the potential generalization of the following result to the convex case. In
Löhne and Weißing (Math Methods Oper Res 84(2):411–426, 2016), it has been shown for
the polyhedral case, how to construct a polyhedral projection associated to any given vector
linear program and how to relate their solutions. This in turn yields an equivalence between
polyhedral projection, multi-objective linear programming and vector linear programming.
We will show that only some parts of this result can be generalized to the convex case, and
discuss the limitations.
Original language English 301 - 327 Journal of Global Optimization 83 https://doi.org/10.1007/s10898-021-01111-1 Published - 2022
## Austrian Classification of Fields of Science and Technology (ÖFOS)
• 101024 Probability theory
• 101007 Financial mathematics
• 502009 Corporate finance
|
2022-08-16 10:51:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8157575130462646, "perplexity": 1383.210841332721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00772.warc.gz"}
|
https://socratic.org/questions/how-do-you-write-10-5-in-decimal-form
|
How do you write 10^-5 in decimal form?
Oct 9, 2015
${10}^{- 5} = 0.00001$
Explanation:
By definition of negative power, ${A}^{- n} = \frac{1}{{A}^{n}}$
Therefore, by definition, ${10}^{- 5} = \frac{1}{{10}^{5}}$
Obviously,
$\frac{1}{{10}^{5}} = \frac{1}{100000} = 0.00001$
Hence, ${10}^{- 5} = 0.00001$
|
2020-02-27 06:21:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945646286010742, "perplexity": 7200.356732704552}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146647.82/warc/CC-MAIN-20200227033058-20200227063058-00272.warc.gz"}
|
https://gigaom.com/2014/01/05/belkin-expands-wemo-line-with-connected-lightbulbs-and-wi-fi-based-presence/
|
# Belkin expands WeMo line with connected lightbulbs and Wi-Fi-based presence
Belkin has a few cool new products up its sleeve for CES, with three that have cool implications for the smart home and internet of things and one that’s good news for router nerds. Three are products and one is a capability it’s adding to its Linksys line of routers that will use WeMo devices and the router to determine where you are in the home.
On the product side, the company is adding connected LED light bulbs to its WeMo line of connected devices. For $129.99 you get two dimmable LED light bulbs and a bridge component that will connect over your home’s WiFi network. Additional bulbs are$39.99. These won’t change color but are a bit cheaper than the Philips Hue connected bulbs. You can manage up to 50 lights on one bridge.
This means WeMo now has connected bulbs, outlets and wall switches giving consumers a lot of options when it comes to connecting their lighting. Ironically I’ve found that in using my Hue connected lights in my living room is somewhat tough because I tend to turn the lights off at the switch by habit. I almost want to add a connected switch so when I’m playing with the lights in the app, I don’t have to get up and flip the switch. But that seems ridiculous.
The second news element is that the Belkin deal with Jardin foods ( maker of Crock-Pots and other kitchen appliances) has yielded a \$99.99 connected WeMo slow cooker. This is not the refrigerated slow cooker of my dreams, but it’s a start. It lets people control the temperature of their food remotely.
The other product news is interesting, because Belkin is reaching out to the maker community with a box called the WeMo Maker, which adds connectivity to any device controlled with a DC switch. This could be a garage door, robotics projects, motors, sprinkler systems and other DC powered devices. The WeMo Maker is a module you wire into the device including sensors. WeMo Maker, which will be out in September will also work with IFTTT. Pricing for that isn’t out yet, but it’s the most interesting element of the WeMo ecosystem from my perspective.
Finally, Belkin is bringing back the old-school design of the Linksys WRT54G router (see below) with features people need for today’s far more complicated networks. Linksys is also working with the OpenWRT community to make an open source firmware downloadable for the new router when it is available, which should be a nice addition for those that like hacking their gear.
The router should be available in the Spring. Some of the features that seem worthwhile, are network maps that will let consumers see all the devices (like all those WeMo gizmos) on their network and will let them control those devices and their network access from an administrative portal. For example, you could block your child’s internet connection on all of their portable devices after a certain time.
Also cool is something Belkin CEO Chet Pipkin, said the company was working on that would use WeMo products and the Linksys routers to offer users presence detection in the home. That’s not here yet, but it’s something Pipkin and I discussed on our Dec. 31 podcast. That adds another technical method to Bluetooth beacons, motion detection and microphones as a way to track where people are in the home as we attempt to make them even smarter.
For my money, the better routers, smarter Wi-Fi and the WeMo Maker are probably the coolest stuff Belkin has going on related to the smart home at the moment.
|
2021-07-26 16:55:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19106994569301605, "perplexity": 3161.7905132159945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00185.warc.gz"}
|
https://math.stackexchange.com/questions/2982370/applying-rules-of-algebra-when-working-with-multiplication-and-exponents
|
# Applying rules of algebra when working with multiplication and exponents
I'm taking an online course and help is hard to find. This specific problem has to do with recurrence relation. I apologize for being too general but I'm just looking for help in how to go about solving this problem.
The entire problem is:
Let $$d_0,d_1,d_2,\ldots$$ be defined by the formula $$d_n=3^n-2^n$$ for all integers $$n\ge0$$. Show that this sequence satisfies the recurrence relation: $$d_k=5d_{k-1}-6d_{k-2}$$.
The step I can do without any trouble is finding the statements that represent the values we're dealing with in the relation, $$d_{k-1}$$ and $$d_{k-2}$$:
$$d_k=3^k-2^k$$
$$d_{k-1}=3^{k-1}-2^{k-1}$$
$$d_{k-2}=3^{k-2}-2^{k-2}$$
However, when it comes time to plugging that into the relation and simplifying it down to the original definition of $$d_k=3^k-2^k$$ I fail miserably.
$$d_k=5d_{k-1}-6d_{k-2}$$
$$=5\left(3^{k-1}-2^{k-1}\right)-6\left(3^{k-2}-2^{k-2}\right)$$
$$=5\cdot3^{k-1}-5\cdot2^{k-1}-6\cdot3^{k-2}+6\cdot2^{k-2}$$
$$=5\cdot\frac{3^k}{3}-6\cdot\frac{3^k}{3^2}-5\cdot\frac{2^k}{2}+6\cdot\frac{2^k}{2^2}$$
Is that a decent start? Is there a better way to go, like breaking everything down into their simplest components, like so:
$$d_k=5d_{k-1}-6d_{k-2}$$
$$=(2+3)\left(3^{k-1}-2^{k-1}\right)-(2\cdot3)\left(3^{k-2}-2^{k-2}\right)$$
$$=2\cdot\frac{3^k}{3}-2\cdot\frac{2^k}{2}+3\cdot\frac{3^k}{3}-3\cdot\frac{2^k}{2}-\left(\left(2\cdot\frac{3^k}{3^2}-2\cdot\frac{2^k}{2^2}\right)-\left(3\cdot\frac{3^k}{3^2}-3\cdot\frac{2^k}{2^2}\right)\right)$$
But then what?
If I go either of these routes, I get stuck. I feel I have two issues: a.) identifying the way to go that seem the most logical; and, b.) working towards a solution. I don't know if I missed a big chunk in Algebra or if my brain just doesn't see what's going on.
What am I missing? Are either of these steps valid things to try? What are some general rules to follow to work these out?
Also, what specific discipline of Algebra is this? I don't think my course is introducing us to this stuff. I think it assumes we already know how to work these out.
• Have you tried to factor $3^{k}$ and $2^{k}$? – IEDC PHY Nov 3 '18 at 0:15
• While I appreciate your comment, I wasn't sure which line above to attempt to factor them out. – harperville Nov 3 '18 at 1:16
I start from what you wrote down: $$5d_{k-1}-6d_{k-2}=5\cdot3^{k-1}-6\cdot3^{k-2}-5\cdot2^{k-1}+6\cdot2^{k-2}$$ and I want to get to $$d_k=3^k-2^k$$.
I rewrite $$3^{k-1}=3\cdot3^{k-2}$$, and similarly for $$2^{k-1}$$: $$5\cdot3\cdot3^{k-2}-6\cdot3^{k-2}-5\cdot2\cdot2^{k-2}+6\cdot2^{k-2}$$ $$=15\cdot3^{k-2}-6\cdot3^{k-2}-10\cdot2^{k-2}+6\cdot2^{k-2}$$ I collect $$3^{k-2}$$ and $$2^{k-2}$$: $$=(15-6)3^{k-2}-(10-6)2^{k-2}=9\cdot3^{k-2}-4\cdot2^{k-2}=3^2\cdot3^{k-2}-2^2\cdot2^{k-2}$$ From here I collapse the exponents to get the desired result. $$=3^k-2^k$$
|
2019-06-20 19:03:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7596885561943054, "perplexity": 257.00666692465757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999273.24/warc/CC-MAIN-20190620190041-20190620212041-00264.warc.gz"}
|
http://adam.chlipala.net/cpdt/repo/file/5cdfbf56afbe/src/LogicProg.v
|
### view src/LogicProg.v @ 327:5cdfbf56afbe
Cope with coqdoc bug
line wrap: on
line source
(* Copyright (c) 2011, Adam Chlipala
*
* Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
* The license text is available at:
*)
(* begin hide *)
Require Import List.
Require Import CpdtTactics.
Set Implicit Arguments.
(* end hide *)
(** %\part{Proof Engineering}
\chapter{Proof Search by Logic Programming}% *)
(** Exciting new chapter that is missing prose for the new content! Some content was moved from the next chapter, and it may not seem entirely to fit here yet. *)
(** * Introducing Logic Programming *)
Print plus.
Inductive plusR : nat -> nat -> nat -> Prop :=
| PlusO : forall m, plusR O m m
| PlusS : forall n m r, plusR n m r
-> plusR (S n) m (S r).
(* begin thide *)
Hint Constructors plusR.
(* end thide *)
Theorem plus_plusR : forall n m,
plusR n m (n + m).
(* begin thide *)
induction n; crush.
Qed.
(* end thide *)
Theorem plusR_plus : forall n m r,
plusR n m r
-> r = n + m.
(* begin thide *)
induction 1; crush.
Qed.
(* end thide *)
Example four_plus_three : 4 + 3 = 7.
(* begin thide *)
reflexivity.
Qed.
(* end thide *)
Example four_plus_three' : plusR 4 3 7.
(* begin thide *)
auto.
Qed.
(* end thide *)
Example five_plus_three' : plusR 5 3 8.
(* begin thide *)
auto 6.
Restart.
info auto 6.
Qed.
(* end thide *)
(* begin thide *)
Hint Constructors ex.
(* end thide *)
Example seven_minus_three : exists x, x + 3 = 7.
(* begin thide *)
eauto 6.
Abort.
(* end thide *)
Example seven_minus_three' : exists x, plusR x 3 7.
(* begin thide *)
info eauto 6.
Qed.
(* end thide *)
Example seven_minus_four' : exists x, plusR 4 x 7.
(* begin thide *)
info eauto 6.
Qed.
(* end thide *)
(* begin thide *)
SearchRewrite (O + _).
Hint Immediate plus_O_n.
Lemma plusS : forall n m r,
n + m = r
-> S n + m = S r.
crush.
Qed.
Hint Resolve plusS.
(* end thide *)
Example seven_minus_three : exists x, x + 3 = 7.
(* begin thide *)
info eauto 6.
Qed.
(* end thide *)
Example seven_minus_four : exists x, 4 + x = 7.
(* begin thide *)
info eauto 6.
Qed.
(* end thide *)
Example hundred_minus_hundred : exists x, 4 + x + 0 = 7.
(* begin thide *)
eauto 6.
Abort.
(* end thide *)
(* begin thide *)
Lemma plusO : forall n m,
n = m
-> n + 0 = m.
crush.
Qed.
Hint Resolve plusO.
(* end thide *)
Example seven_minus_four_zero : exists x, 4 + x + 0 = 7.
(* begin thide *)
info eauto 7.
Qed.
(* end thide *)
Check eq_trans.
Section slow.
Hint Resolve eq_trans.
Example three_minus_four_zero : exists x, 1 + x = 0.
Time eauto 1.
Time eauto 2.
Time eauto 3.
Time eauto 4.
Time eauto 5.
debug eauto 3.
Abort.
End slow.
(* begin thide *)
Hint Resolve eq_trans : slow.
(* end thide *)
Example three_minus_four_zero : exists x, 1 + x = 0.
(* begin thide *)
eauto.
Abort.
(* end thide *)
Example seven_minus_three_again : exists x, x + 3 = 7.
(* begin thide *)
eauto 6.
Qed.
(* end thide *)
Example needs_trans : forall x y, 1 + x = y
-> y = 2
-> exists z, z + x = 3.
(* begin thide *)
info eauto with slow.
Qed.
(* end thide *)
(** * Searching for Underconstrained Values *)
Print length.
Example length_1_2 : length (1 :: 2 :: nil) = 2.
auto.
Qed.
Print length_1_2.
(* begin thide *)
Theorem length_O : forall A, length (nil (A := A)) = O.
crush.
Qed.
Theorem length_S : forall A (h : A) t n,
length t = n
-> length (h :: t) = S n.
crush.
Qed.
Hint Resolve length_O length_S.
(* end thide *)
Example length_is_2 : exists ls : list nat, length ls = 2.
(* begin thide *)
eauto.
Show Proof.
Abort.
(* end thide *)
Print Forall.
Example length_is_2 : exists ls : list nat, length ls = 2
/\ Forall (fun n => n >= 1) ls.
(* begin thide *)
eauto 9.
Qed.
(* end thide *)
Definition sum := fold_right plus O.
(* begin thide *)
Lemma plusO' : forall n m,
n = m
-> 0 + n = m.
crush.
Qed.
Hint Resolve plusO'.
Hint Extern 1 (sum _ = _) => simpl.
(* end thide *)
Example length_and_sum : exists ls : list nat, length ls = 2
/\ sum ls = O.
(* begin thide *)
eauto 7.
Qed.
(* end thide *)
Print length_and_sum.
Example length_and_sum' : exists ls : list nat, length ls = 5
/\ sum ls = 42.
(* begin thide *)
eauto 15.
Qed.
(* end thide *)
Print length_and_sum'.
Example length_and_sum'' : exists ls : list nat, length ls = 2
/\ sum ls = 3
/\ Forall (fun n => n <> 0) ls.
(* begin thide *)
eauto 11.
Qed.
(* end thide *)
Print length_and_sum''.
(** * Synthesizing Programs *)
Inductive exp : Set :=
| Const : nat -> exp
| Var : exp
| Plus : exp -> exp -> exp.
Inductive eval (var : nat) : exp -> nat -> Prop :=
| EvalConst : forall n, eval var (Const n) n
| EvalVar : eval var Var var
| EvalPlus : forall e1 e2 n1 n2, eval var e1 n1
-> eval var e2 n2
-> eval var (Plus e1 e2) (n1 + n2).
(* begin thide *)
Hint Constructors eval.
(* end thide *)
Example eval1 : forall var, eval var (Plus Var (Plus (Const 8) Var)) (var + (8 + var)).
(* begin thide *)
auto.
Qed.
(* end thide *)
Example eval1' : forall var, eval var (Plus Var (Plus (Const 8) Var)) (2 * var + 8).
(* begin thide *)
eauto.
Abort.
(* end thide *)
(* begin thide *)
Theorem EvalPlus' : forall var e1 e2 n1 n2 n, eval var e1 n1
-> eval var e2 n2
-> n1 + n2 = n
-> eval var (Plus e1 e2) n.
crush.
Qed.
Hint Resolve EvalPlus'.
Hint Extern 1 (_ = _) => abstract omega.
(* end thide *)
Example eval1' : forall var, eval var (Plus Var (Plus (Const 8) Var)) (2 * var + 8).
(* begin thide *)
eauto.
Qed.
(* end thide *)
Print eval1'.
Example synthesize1 : exists e, forall var, eval var e (var + 7).
(* begin thide *)
eauto.
Qed.
(* end thide *)
Print synthesize1.
Example synthesize2 : exists e, forall var, eval var e (2 * var + 8).
(* begin thide *)
eauto.
Qed.
(* end thide *)
Print synthesize2.
Example synthesize3 : exists e, forall var, eval var e (3 * var + 42).
(* begin thide *)
eauto.
Qed.
(* end thide *)
Print synthesize3.
(* begin thide *)
Theorem EvalConst' : forall var n m, n = m
-> eval var (Const n) m.
crush.
Qed.
Hint Resolve EvalConst'.
Theorem zero_times : forall n m r,
r = m
-> r = 0 * n + m.
crush.
Qed.
Hint Resolve zero_times.
Theorem EvalVar' : forall var n,
var = n
-> eval var Var n.
crush.
Qed.
Hint Resolve EvalVar'.
Theorem plus_0 : forall n r,
r = n
-> r = n + 0.
crush.
Qed.
Theorem times_1 : forall n, n = 1 * n.
crush.
Qed.
Hint Resolve plus_0 times_1.
Require Import Arith Ring.
Theorem combine : forall x k1 k2 n1 n2,
(k1 * x + n1) + (k2 * x + n2) = (k1 + k2) * x + (n1 + n2).
intros; ring.
Qed.
Hint Resolve combine.
Theorem linear : forall e, exists k, exists n,
forall var, eval var e (k * var + n).
induction e; crush; eauto.
Qed.
Print linear.
(* end thide *)
(** * More on [auto] Hints *)
(** Another class of built-in tactics includes [auto], [eauto], and [autorewrite]. These are based on %\textit{%#<i>#hint databases#</i>#%}%, which we have seen extended in many examples so far. These tactics are important, because, in Ltac programming, we cannot create %%#"#global variables#"#%''% whose values can be extended seamlessly by different modules in different source files. We have seen the advantages of hints so far, where [crush] can be defined once and for all, while still automatically applying the hints we add throughout developments.
The basic hints for [auto] and [eauto] are [Hint Immediate lemma], asking to try solving a goal immediately by applying a lemma and discharging any hypotheses with a single proof step each; [Resolve lemma], which does the same but may add new premises that are themselves to be subjects of nested proof search; [Constructors type], which acts like [Resolve] applied to every constructor of an inductive type; and [Unfold ident], which tries unfolding [ident] when it appears at the head of a proof goal. Each of these [Hint] commands may be used with a suffix, as in [Hint Resolve lemma : my_db]. This adds the hint only to the specified database, so that it would only be used by, for instance, [auto with my_db]. An additional argument to [auto] specifies the maximum depth of proof trees to search in depth-first order, as in [auto 8] or [auto 8 with my_db]. The default depth is 5.
All of these [Hint] commands can be issued alternatively with a more primitive hint kind, [Extern]. A few examples should do best to explain how [Hint Extern] works. *)
Theorem bool_neq : true <> false.
(* begin thide *)
auto.
(** [crush] would have discharged this goal, but the default hint database for [auto] contains no hint that applies. *)
Abort.
(** It is hard to come up with a [bool]-specific hint that is not just a restatement of the theorem we mean to prove. Luckily, a simpler form suffices. *)
Hint Extern 1 (_ <> _) => congruence.
Theorem bool_neq : true <> false.
auto.
Qed.
(* end thide *)
(** Our hint says: %%#"#whenever the conclusion matches the pattern [_ <> _], try applying [congruence].#"#%''% The [1] is a cost for this rule. During proof search, whenever multiple rules apply, rules are tried in increasing cost order, so it pays to assign high costs to relatively expensive [Extern] hints.
[Extern] hints may be implemented with the full Ltac language. This example shows a case where a hint uses a [match]. *)
Section forall_and.
Variable A : Set.
Variables P Q : A -> Prop.
Hypothesis both : forall x, P x /\ Q x.
Theorem forall_and : forall z, P z.
(* begin thide *)
crush.
(** [crush] makes no progress beyond what [intros] would have accomplished. [auto] will not apply the hypothesis [both] to prove the goal, because the conclusion of [both] does not unify with the conclusion of the goal. However, we can teach [auto] to handle this kind of goal. *)
Hint Extern 1 (P ?X) =>
match goal with
| [ H : forall x, P x /\ _ |- _ ] => apply (proj1 (H X))
end.
auto.
Qed.
(* end thide *)
(** We see that an [Extern] pattern may bind unification variables that we use in the associated tactic. [proj1] is a function from the standard library for extracting a proof of [R] from a proof of [R /\ S]. *)
End forall_and.
(** After our success on this example, we might get more ambitious and seek to generalize the hint to all possible predicates [P].
[[
Hint Extern 1 (?P ?X) =>
match goal with
| [ H : forall x, P x /\ _ |- _ ] => apply (proj1 (H X))
end.
]]
Coq's [auto] hint databases work as tables mapping %\textit{%#<i>#head symbols#</i>#%}% to lists of tactics to try. Because of this, the constant head of an [Extern] pattern must be determinable statically. In our first [Extern] hint, the head symbol was [not], since [x <> y] desugars to [not (eq x y)]; and, in the second example, the head symbol was [P].
This restriction on [Extern] hints is the main limitation of the [auto] mechanism, preventing us from using it for general context simplifications that are not keyed off of the form of the conclusion. This is perhaps just as well, since we can often code more efficient tactics with specialized Ltac programs, and we will see how in the next chapter. *)
(** * Rewrite Hints *)
(** We have used [Hint Rewrite] in many examples so far. [crush] uses these hints by calling [autorewrite]. Our rewrite hints have taken the form [Hint Rewrite lemma : cpdt], adding them to the [cpdt] rewrite database. This is because, in contrast to [auto], [autorewrite] has no default database. Thus, we set the convention that [crush] uses the [cpdt] database.
This example shows a direct use of [autorewrite]. *)
Section autorewrite.
Variable A : Set.
Variable f : A -> A.
Hypothesis f_f : forall x, f (f x) = f x.
Hint Rewrite f_f : my_db.
Lemma f_f_f : forall x, f (f (f x)) = f x.
intros; autorewrite with my_db; reflexivity.
Qed.
(** There are a few ways in which [autorewrite] can lead to trouble when insufficient care is taken in choosing hints. First, the set of hints may define a nonterminating rewrite system, in which case invocations to [autorewrite] may not terminate. Second, we may add hints that %%#"#lead [autorewrite] down the wrong path.#"#%''% For instance: *)
Section garden_path.
Variable g : A -> A.
Hypothesis f_g : forall x, f x = g x.
Hint Rewrite f_g : my_db.
Lemma f_f_f' : forall x, f (f (f x)) = f x.
intros; autorewrite with my_db.
(** [[
============================
g (g (g x)) = g x
]]
*)
Abort.
(** Our new hint was used to rewrite the goal into a form where the old hint could no longer be applied. This %%#"#non-monotonicity#"#%''% of rewrite hints contrasts with the situation for [auto], where new hints may slow down proof search but can never %%#"#break#"#%''% old proofs. The key difference is that [auto] either solves a goal or makes no changes to it, while [autorewrite] may change goals without solving them. The situation for [eauto] is slightly more complicated, as changes to hint databases may change the proof found for a particular goal, and that proof may influence the settings of unification variables that appear elsewhere in the proof state. *)
Reset garden_path.
(** [autorewrite] also works with quantified equalities that include additional premises, but we must be careful to avoid similar incorrect rewritings. *)
Section garden_path.
Variable P : A -> Prop.
Variable g : A -> A.
Hypothesis f_g : forall x, P x -> f x = g x.
Hint Rewrite f_g : my_db.
Lemma f_f_f' : forall x, f (f (f x)) = f x.
intros; autorewrite with my_db.
(** [[
============================
g (g (g x)) = g x
subgoal 2 is:
P x
subgoal 3 is:
P (f x)
subgoal 4 is:
P (f x)
]]
*)
Abort.
(** The inappropriate rule fired the same three times as before, even though we know we will not be able to prove the premises. *)
Reset garden_path.
(** Our final, successful, attempt uses an extra argument to [Hint Rewrite] that specifies a tactic to apply to generated premises. Such a hint is only used when the tactic succeeds for all premises, possibly leaving further subgoals for some premises. *)
Section garden_path.
Variable P : A -> Prop.
Variable g : A -> A.
Hypothesis f_g : forall x, P x -> f x = g x.
(* begin thide *)
Hint Rewrite f_g using assumption : my_db.
(* end thide *)
Lemma f_f_f' : forall x, f (f (f x)) = f x.
(* begin thide *)
intros; autorewrite with my_db; reflexivity.
Qed.
(* end thide *)
(** [autorewrite] will still use [f_g] when the generated premise is among our assumptions. *)
Lemma f_f_f_g : forall x, P x -> f (f x) = g x.
(* begin thide *)
intros; autorewrite with my_db; reflexivity.
(* end thide *)
Qed.
End garden_path.
(** remove printing * *)
(** It can also be useful to use the [autorewrite with db in *] form, which does rewriting in hypotheses, as well as in the conclusion. *)
(** printing * $*$ *)
Lemma in_star : forall x y, f (f (f (f x))) = f (f y)
-> f x = f (f (f y)).
(* begin thide *)
intros; autorewrite with my_db in *; assumption.
(* end thide *)
Qed.
End autorewrite.
(** * Exercises *)
(** printing * $\cdot$ *)
(** %\begin{enumerate}%#<ol>#
%\item%#<li># I did a Google search for group theory and found #<a href="http://dogschool.tripod.com/housekeeping.html">#a page that proves some standard theorems#</a>#%\footnote{\url{http://dogschool.tripod.com/housekeeping.html}}%. This exercise is about proving all of the theorems on that page automatically.
For the purposes of this exercise, a group is a set [G], a binary function [f] over [G], an identity element [e] of [G], and a unary inverse function [i] for [G]. The following laws define correct choices of these parameters. We follow standard practice in algebra, where all variables that we mention are quantified universally implicitly at the start of a fact. We write infix [*] for [f], and you can set up the same sort of notation in your code with a command like [Infix "*" := f.].
%\begin{itemize}%#<ul>#
%\item%#<li># %\textbf{%#<b>#Associativity#</b>#%}%: [(a * b) * c = a * (b * c)]#</li>#
%\item%#<li># %\textbf{%#<b>#Right Identity#</b>#%}%: [a * e = a]#</li>#
%\item%#<li># %\textbf{%#<b>#Right Inverse#</b>#%}%: [a * i a = e]#</li>#
#</ul> </li>#%\end{itemize}%
The task in this exercise is to prove each of the following theorems for all groups, where we define a group exactly as above. There is a wrinkle: every theorem or lemma must be proved by either a single call to [crush] or a single call to [eauto]! It is allowed to pass numeric arguments to [eauto], where appropriate. Recall that a numeric argument sets the depth of proof search, where 5 is the default. Lower values can speed up execution when a proof exists within the bound. Higher values may be necessary to find more involved proofs.
%\begin{itemize}%#<ul>#
%\item%#<li># %\textbf{%#<b>#Characterizing Identity#</b>#%}%: [a * a = a -> a = e]#</li>#
%\item%#<li># %\textbf{%#<b>#Left Inverse#</b>#%}%: [i a * a = e]#</li>#
%\item%#<li># %\textbf{%#<b>#Left Identity#</b>#%}%: [e * a = a]#</li>#
%\item%#<li># %\textbf{%#<b>#Uniqueness of Left Identity#</b>#%}%: [p * a = a -> p = e]#</li>#
%\item%#<li># %\textbf{%#<b>#Uniqueness of Right Inverse#</b>#%}%: [a * b = e -> b = i a]#</li>#
%\item%#<li># %\textbf{%#<b>#Uniqueness of Left Inverse#</b>#%}%: [a * b = e -> a = i b]#</li>#
%\item%#<li># %\textbf{%#<b>#Right Cancellation#</b>#%}%: [a * x = b * x -> a = b]#</li>#
%\item%#<li># %\textbf{%#<b>#Left Cancellation#</b>#%}%: [x * a = x * b -> a = b]#</li>#
%\item%#<li># %\textbf{%#<b>#Distributivity of Inverse#</b>#%}%: [i (a * b) = i b * i a]#</li>#
%\item%#<li># %\textbf{%#<b>#Double Inverse#</b>#%}%: [i (][i a) = a]#</li>#
%\item%#<li># %\textbf{%#<b>#Identity Inverse#</b>#%}%: [i e = e]#</li>#
#</ul> </li>#%\end{itemize}%
One more use of tactics is allowed in this problem. The following lemma captures one common pattern of reasoning in algebra proofs: *)
(* begin hide *)
Variable G : Set.
Variable f : G -> G -> G.
Infix "*" := f.
(* end hide *)
Lemma mult_both : forall a b c d1 d2,
a * c = d1
-> b * c = d2
-> a = b
-> d1 = d2.
crush.
Qed.
(** That is, we know some equality [a = b], which is the third hypothesis above. We derive a further equality by multiplying both sides by [c], to yield [a * c = b * c]. Next, we do algebraic simplification on both sides of this new equality, represented by the first two hypotheses above. The final result is a new theorem of algebra.
The next chapter introduces more details of programming in Ltac, but here is a quick teaser that will be useful in this problem. Include the following hint command before you start proving the main theorems of this exercise: *)
Hint Extern 100 (_ = _) =>
match goal with
| [ _ : True |- _ ] => fail 1
| _ => assert True by constructor; eapply mult_both
end.
(** This hint has the effect of applying [mult_both] %\emph{%#<i>#at most once#</i>#%}% during a proof. After the next chapter, it should be clear why the hint has that effect, but for now treat it as a useful black box. Simply using [Hint Resolve mult_both] would increase proof search time unacceptably, because there are just too many ways to use [mult_both] repeatedly within a proof.
The order of the theorems above is itself a meta-level hint, since I found that order to work well for allowing the use of earlier theorems as hints in the proofs of later theorems.
The key to this problem is coming up with further lemmas like [mult_both] that formalize common patterns of reasoning in algebraic proofs. These lemmas need to be more than sound: they must also fit well with the way that [eauto] does proof search. For instance, if we had given [mult_both] a traditional statement, we probably would have avoided %%#"#pointless#"#%''% equalities like [a = b], which could be avoided simply by replacing all occurrences of [b] with [a]. However, the resulting theorem would not work as well with automated proof search! Every additional hint you come up with should be registered with [Hint Resolve], so that the lemma statement needs to be in a form that [eauto] understands %%#"#natively.#"#%''%
I recommend testing a few simple rules corresponding to common steps in algebraic proofs. You can apply them manually with any tactics you like (e.g., [apply] or [eapply]) to figure out what approaches work, and then switch to [eauto] once you have the full set of hints.
I also proved a few hint lemmas tailored to particular theorems, but which do not give common algebraic simplification rules. You will probably want to use some, too, in cases where [eauto] does not find a proof within a reasonable amount of time. In total, beside the main theorems to be proved, my sample solution includes 6 lemmas, with a mix of the two kinds of lemmas. You may use more in your solution, but I suggest trying to minimize the number.
#</ol>#%\end{enumerate}% *)
|
2018-09-19 20:43:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5629324316978455, "perplexity": 8824.411875874606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156305.13/warc/CC-MAIN-20180919200547-20180919220547-00433.warc.gz"}
|
https://complex-analysis.com/content/domain_coloring.html
|
Domain coloring
Complex phase portraits
A way to visualize complex functions $f:\mathbb{C}\to\mathbb{C}$ is using phase-portraits. A complex number can be assigned a color according to its argument/phase. Positive numbers are colored red; negative numbers are colored in cyan and numbers with a non-zero imaginary part are colored as in Figure 1, which shows a phase portrait for the function $f(z)=z$.
In his book Visual Complex Functions, Elias Wegert employs phase portraits with contour lines of phase and modulus (enhanced phase portraits) for the study of the theory of complex functions. See for example Figures 2 and 3 for the function $f(z)=z$.
We say that a complex funcion $f$ has a root (or a zero) at $z_0$, if $f(z_0)=0$. We say that $z_0$ is a pole when $f(z_0)$ is undefined. With the use of enhanced phase portraits, roots and poles of a complex function $f(z)$ can be easily spotted at the points where all colors meet. Figures 4 and 5 show the enhanced phase portrait of the functions $$f(z)=z \quad \text{and}\quad g(z)=1/z,$$ respectively. Observe the contrast between the level curves of modulus in each case.
Consider now the function \begin{eqnarray}\label{eq1} f(z)=\frac{z-1}{z^2+z+1} \end{eqnarray} which has a root at $z_0=1$ and two poles at $$z_{1}=\frac{-1 + \sqrt{3}\,i}{2} \quad \text{and} \quad z_{2}=\frac{-1 - \sqrt{3}\,i}{2}.$$
Figure 6 shows the enhanced portrait of (\ref{eq1}) with level curves of the modulus. Notice the behaviour of the level curves of the modulus around the root (right side) and the poles (left side). Can you see the difference?
Explore complex functions
Use the applet below to explore enhanced phase portraits of complex functions.
Sorry, the applet is not supported for small screens. Rotate your device to landscape. Or resize your window so it's more wide than tall.
Warning!
If we do not impose additional restrictions, like continuity or differentiability, the isochromatic sets of complex functions can be arbitrary - but this is not so for analytic functions, which are the objects of prime interest in this text.
In fact, analytic functions are (almost) uniquely determined by their (pure) phase portraits, but this is not so for general functions. For example, the functions $f$ (analytic) and $g$ (not analytic) defined by \begin{eqnarray}\label{example} f(z)=\frac{z-1}{z^2+z+1}, \qquad g(z)=(z-1)\left(\overline{z}^2+\overline{z}+1\right) \end{eqnarray} have the same phase (except at their zeros and poles) though they are completely different.
Since pure phase portraits do not always display enough information for exploring general complex functions, I recommend the use of their enhanced versions with contour lines of modulus and phase in such cases. Figure 7 shows two such portraits of the functions $f$ (left) and $g$ (right) defined in (\ref{example}).
Sorry, the applet is not supported for small screens. Rotate your device to landscape. Or resize your window so it's more wide than tall.
A notable distinction between the two portraits is the shape of the tiles. In the left picture most of them are almost squares and have right-angled corners. In contrast, many tiles in the portrait of $g$ are prolate and their angles differ significantly from $\pi/2$ - at some points the contour lines of modulus and phase are even mutually tangent.
NEXT: The Complex Power Function
|
2021-12-08 05:36:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7855492830276489, "perplexity": 612.4910095934938}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363445.41/warc/CC-MAIN-20211208053135-20211208083135-00296.warc.gz"}
|
https://www.semanticscholar.org/paper/A-Restricted-Second-Order-Logic-for-Time-Ferrarotti-Gonz%C3%A1lez/b111af3a8df3070818fb2128257651663919bb75
|
# A Restricted Second-Order Logic for Non-deterministic Poly-Logarithmic Time
@article{Ferrarotti2020ARS,
title={A Restricted Second-Order Logic for Non-deterministic Poly-Logarithmic Time},
author={Flavio Ferrarotti and Sen{\'e}n Gonz{\'a}lez and Klaus-Dieter Schewe and Jos{\'e} Maria Turull Torres},
journal={ArXiv},
year={2020},
volume={abs/1912.00010}
}
We introduce a restricted second-order logic $\mathrm{SO}^{\mathit{plog}}$ for finite structures where second-order quantification ranges over relations of size at most poly-logarithmic in the size of the structure. We demonstrate the relevance of this logic and complexity class by several problems in database theory. We then prove a Fagin's style theorem showing that the Boolean queries which can be expressed in the existential fragment of $\mathrm{SO}^{\mathit{plog}}$ corresponds exactly to… Expand
2 Citations
Proper Hierarchies in Polylogarithmic Time and Absence of Complete Problems
• Computer Science, Mathematics
• FoIKS
• 2020
This paper shows that the descriptive complexity theory of polylogarithmic time is taken further showing that there are strict hierarchies inside each of the classes of the hierarchy. Expand
Completeness in Polylogarithmic Time and Space
• Computer Science
• ArXiv
• 2020
An alternative notion of completeness inspired by the concept of uniformity from circuit complexity is developed and proved and it is shown that complete problems can still play an important role in the study of the interrelationship between polylogarithmic and other classical complexity classes. Expand
#### References
SHOWING 1-10 OF 31 REFERENCES
The Polylog-Time Hierarchy Captured by Restricted Second-Order Logic
• Mathematics, Computer Science
• 2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)
• 2018
The problem, which Turing machine complexity class is captured by Boolean queries over ordered relational structures that can be expressed in second-order logic, is investigated and the relevance of this logic and complexity class by several problems in database theory is demonstrated. Expand
A Second-Order Logic in Which Variables Range over Relations with Complete First-Order Types
• Computer Science
• 2010 XXIX International Conference of the Chilean Computer Science Society
• 2010
The complexity class NP$^F$ is defined by using a variation of the relational machine of S. Abiteboul and V. Vianu and it is proved that this complexity class is captured by $\Sigma^{1,F}_1$. Expand
Second-Order Logic over Strings: Regular and Non-regular Fragments
• Computer Science
• Developments in Language Theory
• 2001
An exhaustive classification of the regular and nonregular prefix classes of general second-order logic, and derive of complexity results for the corresponding model checking problems. Expand
Existential second-order logic over graphs: charting the tractability frontier
• Mathematics, Computer Science
• Proceedings 41st Annual Symposium on Foundations of Computer Science
• 2000
A dichotomy holds, i.e., each prefix class of existential second-order logic either contains sentences that can express NP-complete problems or each of its sentences expresses a polynomial-time solvable problem. Expand
Choiceless Polynomial Time
• Computer Science, Mathematics
• Ann. Pure Appl. Log.
• 1999
This work attempts to capture the choiceless fragment of PTime, a version of abstract state machines (formerly called evolving algebras) that is to replace arbitrary choice with parallel execution and is more expressive than other PTime logics in the literature. Expand
Capturing Complexity Classes by Fragments of Second-Order Logic
• E. Grädel
• Computer Science, Mathematics
• Theor. Comput. Sci.
• 1992
It is shown that all these logics collapse to their existential fragments and are strictly weaker than previously known logics for these classes and fail to express some very simple properties. Expand
Existential second-order logic over graphs: Charting the tractability frontier
• Mathematics, Computer Science
• JACM
• 2004
This article completely characterize the computational complexity of prefix classes of existential second-order logic in three different contexts: (1) over directed graphs, (2) over undirected graphs with self-loops and (3) over undirected graphs without self-Loops. Expand
Existential second-order logic over strings
• Mathematics, Computer Science
• JACM
• 2000
ESO and ESO are the maximal standard ESO-prefix classes contained in MSO, thus expressing only regular languages, and the following dichotomy theorem is proved: An ESO prefix-class either expresses onlyregular languages (and is thus in ESO), or it expresses some NP-complete languages. Expand
The complexity of theorem-proving procedures
• S. Cook
• Computer Science, Mathematics
• STOC
• 1971
It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be “reduced” to the problem of determining whether a given propositional formula is aExpand
A restricted second order logic for finite structures
We introduce a restricted version of second order logic SOω in which the second order quantifiers range over relations that are closed under the equivalence relation ≡k of k variable equivalence, forExpand
|
2021-12-03 07:59:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7758117914199829, "perplexity": 2358.2103608800758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00073.warc.gz"}
|
http://accessanesthesiology.mhmedical.com/content.aspx?bookid=353§ionid=40148782
|
Chapter 22
• General anesthesia typically reduces both V̇o2 and V̇co2 by about 15%. Additional reductions are often seen as a result of hypothermia. The greatest reductions are in cerebral and cardiac O2 consumption.
• At end-expiration, intrapleural pressure normally averages about –5 cm H2O and because alveolar pressure is 0 (no flow), transpulmonary pressure is +5 cm H2O.
• The lung volume at the end of a normal exhalation is called functional residual capacity (FRC). At this volume, the inward elastic recoil of the lung approximates the outward elastic recoil of the chest (including resting diaphragmatic tone).
• Closing capacity is normally well below FRC, but it rises steadily with age. This increase is probably responsible for the normal age-related decline in arterial O2 tension.
• Whereas both forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) are effort dependent, forced midexpiratory flow (FEF25–75%) is effort independent and may be a more reliable measure of obstruction.
• Induction of anesthesia consistently produces an additional 15–20% reduction in FRC (400 mL in most patients) beyond what occurs with the supine position alone.
• Local factors are more important than the autonomic system in influencing pulmonary vascular tone. Hypoxia is a powerful stimulus for pulmonary vasoconstriction (the opposite of its systemic effect).
• Because alveolar ventilation (V̇a) is normally about 4 L/min and pulmonary capillary perfusion (Q̇) is 5 L/min, the overall V̇/Q̇ ratio is about 0.8.
• Shunting denotes the process whereby desaturated, mixed venous blood from the right heart returns to the left heart without being resaturated with O2 in the lungs. The overall effect of shunting is to decrease (dilute) arterial O2 content; this type of shunt is referred to as right-to-left.
• General anesthesia commonly increases venous admixture to 5–10%, probably as a result of atelectasis and airway collapse in dependent areas of the lung.
• Note that large increases in Paco2 (> 75 mm Hg) readily produce hypoxia (Pao2 < 60 mm Hg) at room air but not at high inspired O2 concentrations.
• The binding of O2 to hemoglobin appears to be the principal rate-limiting factor in the transfer of O2 from alveolar gas to blood.
• The greater the shunt, the less likely the possibility that an increase in the fraction of inspired oxygen (Fio2) will prevent hypoxemia.
• A rightward shift in the oxygen–hemoglobin dissociation curve lowers O2 affinity, displaces O2 from hemoglobin, and makes more O2 available to tissues; a leftward shift increases hemoglobin’s affinity for O2, reducing its availability to tissues.
• Bicarbonate represents the largest fraction of CO2 in blood.
• Central chemoreceptors are thought to lie on the anterolateral surface of the medulla and respond primarily to changes in cerebrospinal fluid [H+]. This mechanism is effective in regulating Paco2, because the blood–brain barrier is permeable to dissolved CO2 but not to bicarbonate ions.
• With increasing depth of anesthesia, the slope of the Paco...
Sign in to your MyAccess profile while you are actively authenticated on this site via your institution (you will be able to verify this by looking at the top right corner of the screen - if you see your institution's name, you are authenticated). Once logged in to your MyAccess profile, you will be able to access your institution's subscription for 90 days from any location. You must be logged in while authenticated at least once every 90 days to maintain this remote access.
Ok
## Subscription Options
### AccessAnesthesiology Full Site: One-Year Subscription
Connect to the full suite of AccessAnesthesiology content and resources including procedural videos, interactive self-assessment, real-life cases, 20+ textbooks, and more
$995 USD ### Pay Per View: Timed Access to all of AccessAnesthesiology 24 Hour Subscription$34.95
48 Hour Subscription \$54.95
### Pop-up div Successfully Displayed
This div only appears when the trigger link is hovered over. Otherwise it is hidden from view.
|
2017-02-20 18:06:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3313274383544922, "perplexity": 7861.016532551652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00566-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://codegolf.stackexchange.com/questions/219889/square-root-multiples
|
# Square root multiples
This is based on OEIS sequence A261865.
$$\A261865(n)\$$ is the least integer $$\k\$$ such that some multiple of $$\\sqrt{k}\$$ is in the interval $$\(n,n+1)\$$.
The goal of this challenge is to write a program that can find a value of $$\n\$$ that makes $$\A261865(n)\$$ as large as you can. A brute-force program can probably do okay, but there are other methods that you might use to do even better.
### Example
For example, $$\A261865(3) = 3\$$ because
• there is no multiple of $$\\sqrt{1}\$$ in $$\(3,4)\$$ (since $$\3 \sqrt{1} \leq 3\$$ and $$\4 \sqrt{1} \geq 4\$$);
• there is no multiple of $$\\sqrt{2}\$$ in $$\(3,4)\$$ (since $$\2 \sqrt{2} \leq 3\$$ and $$\3 \sqrt{2} \geq 4\$$);
• and there is a multiple of $$\\sqrt{3}\$$ in $$\(3,4)\$$, namely $$\2\sqrt{3} \approx 3.464\$$.
### Analysis
Large values in this sequence are rare!
• 70.7% of the values are $$\2\$$s,
• 16.9% of the values are $$\3\$$s,
• 5.5% of the values are $$\5\$$s,
• 2.8% of the values are $$\6\$$s,
• 1.5% of the values are $$\7\$$s,
• 0.8% of the values are $$\10\$$s, and
• 1.7% of the values are $$\\geq 11\$$.
### Challenge
The goal of this is to write a program that finds a value of $$\n\$$ that makes $$\A261865(n)\$$ as large as possible. Your program should run for no more than one minute and should output a number $$\n\$$. Your score is given by $$\A261865(n)\$$. In the case of a close call, I will run all entries on my 2017 MacBook Pro with 8GB of RAM to determine the winner.
For example, you program might output $$\A261865(257240414)=227\$$ for a score of 227. If two entries get the same score, whichever does it faster on my machine is the winner.
(Your program should not rely on information about pre-computed values, unless you can justify that information with a heuristic or a proof.)
• Uh, should we compute some large value in one minute or the guaranteed maximum of A(1), A(2), …, to some point A(n) within the time limit? – xash Feb 27 at 17:28
• @xash, compute one large value of A(n) within the time limit. – Peter Kagey Feb 27 at 18:31
• It doesn’t have to be a “record” value. (That is, it’s okay if there’s some m<n with A(m)>A(n).) – Peter Kagey Feb 27 at 18:40
• @Sheik, I updated the question to try to address any confusion. I hope this helps! – Peter Kagey Feb 28 at 7:18
• I think that the next best answer is going to have something to do with continued fraction. – user202729 Mar 1 at 9:35
# C++ (clang), score $$\399\$$
#include <iostream>
#include <cmath>
#include <vector>
void f(unsigned long n)
{
unsigned int m = 0;
while (1) {
unsigned long k = 1;
while (++k) {
const double sqrt_k = std::sqrt(static_cast<double>(k));
const double x = sqrt_k * std::ceil(n/sqrt_k);
if (n < x && x < n + 1) break;
}
if (k > m) {
std::cout << "<" << k << "> for A261865(" << n << ")\n";
m = k;
}
++n;
}
}
int main()
{
const unsigned long block_size = 100000000000UL;
const unsigned long num_threads = 12;
unsigned long start = 1UL;
for (int i = 0; i < num_threads - 1; ++i) {
start += block_size;
}
entry.join();
}
}
Try it online!
Gets $$\313\$$ on TIO.
Gets $$\399\$$ on my laptop (Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz with 16 GB RAM):
#include <iostream>
#include <cmath>
#include <sys/time.h>
#include <vector>
int sigTimerSet(int seconds)
{
std::cout << "Starting timer for " << seconds << " seconds.\n";
struct itimerval old_one, new_one;
new_one.it_interval.tv_usec = 0;
new_one.it_interval.tv_sec = 0;
new_one.it_value.tv_usec = 0;
new_one.it_value.tv_sec = static_cast<long int>(seconds);
if (setitimer (ITIMER_REAL, &new_one, &old_one) < 0)
return 0;
else
return old_one.it_value.tv_sec;
}
void f(unsigned long n)
{
unsigned int m = 0;
while (1) {
unsigned long k = 1;
while (++k) {
const double sqrt_k = std::sqrt(static_cast<double>(k));
const double x = sqrt_k * std::ceil(n/sqrt_k);
if (n < x && x < n + 1) break;
}
if (k > m) {
std::cout << "<" << k << "> for A261865(" << n << ")\n";
m = k;
}
++n;
}
}
int main()
{
sigTimerSet(60);
const unsigned long block_size = 100000000000UL;
unsigned long start = 1UL;
for (int i = 0; i < num_threads - 1; ++i) {
start += block_size;
}
entry.join();
}
}
produces:
Starting timer for 60 seconds.
<2> for A261865(1)
<3> for A261865(3)
<7> for A261865(23)
<15> for A261865(30)
<38> for A261865(184)
<3> for A261865(200000000001)
<2> for A261865(300000000001)
<3> for A261865(300000000003)
<<2> for A261865(60000000000143)
<<5> for A261865(3000000000065)
<<6> for A261865(300000000016)
<14> for A261865(300000000023)
<15> for A261865(300000000115)
<19> for A261865(300000000160)
<21> for A261865(300000000399)
<26> for A261865(300000000716)
<2> for A261865(400000000001)
<5> for A261865(400000000002)
<6> for A261865(400000000032)
<11> for A261865(400000000039)
<14> for A261865(400000000063)
<53> for A261865(400000000316)
<3> for A261865(600000000003)
<5> for A261865(600000000006)
<6> for A261865(600000000016)
<295> for A261865(400000002890)
<2> for A261865(900000000001)
<6> for A261865(900000000003)
<10> for A261865(900000000037)
<17> for A261865(900000000108)
<19> for A261865(900000000320)
<30> for A261865(900000000522)
<31> for A261865(900000001146)
<2> for A261865(800000000001)
<5> for A261865(800000000004)
<6> for A261865(800000000034)
<11> for A261865(800000000072)
<<39> for A261865(900000003574)
<35> for A261865(800000000079)
38<<> for A261865(300000002673)
> for A261865(<2> for A261865(100000000001)
<5> for A261865(100000000002)
> for A261865(<13> for A261865(100000000009)
<21> for A261865(100000000340)
<> for A261865(<29> for A261865(100000001644)
77> for A261865(<> for A261865(600000000027)
<10> for A261865(600000000037)
<15> for A261865(600000000068)
<21> for A261865(600000000139)
8091)
51> for A261865(900000008337)
<13> for A261865(1000000000001)
<15> for A261865(1000000000018)
<23> for A261865(1000000000124)
<46> for A261865(16060)
<<29> for A261865(1000000002029)
<38> for A261865(1000000002313)
200000000004)
<10> for A261865(200000000014)
<11> for A261865(200000000028)
<13> for A261865(200000000383)
<17> for A261865(200000000584)
<23> for A261865(200000000861)
71<46> for A261865(200000001329)
42> for A261865(300000003489)
<58> for A261865(200000003435)
<53> for A261865(300000004909)
<<62> for A261865(200000006771)
<30> for A261865(100000003880)
<37> for A261865(100000004041)
> for A261865(<38> for A261865(100000004980)
<<39> for A261865(100000005673)
700000000001)
<51> for A261865(100000008489)
<10> for A261865(700000000063)
<15> for A261865(700000000077)
<19> for A261865(700000000264)
500000000001<26> for A261865(700000001087)
39<30> for A261865(700000001941)
<<38> for A261865(700000002098)
800000005135<43> for A261865(700000002630)
29> for A261865(600000001409)
> for A261865()
<<5> for A261865(500000000004)
31> for A261865(600000003011)
)
1000000003259)
<42> for A261865(1000000005365)
<62> for A261865(300000022786)
<73> for A261865(700000007516)
<35> for A261865(600000003458)
<53> for A261865(1000000014642)
<58> for A261865(16907)
<67> for A261865(600000006910)
<55> for A261865(100000020333)
<79> for A261865(200000057626)
<21> for A261865(500000000018)
<<23> for A261865(500000001052)
<<33> for A261865(500000001414)
<35> for A261865(500000001568)
<<70> for A261865(300000025183)
62> for A261865(100000024519)
61> for A261865(20993)
65> for A261865(1000000025765)
85<74> for A261865(100000027954)
<97> for A261865(26286)
> for A261865(900000026060)
<66> for A261865(1000000037855)
<79> for A261865(100000035523)
<85> for A261865(200000088176)
<93> for A261865(900000037207)
<97> for A261865(300000061052)
<79> for A261865(600000074122)
<78> for A261865(800000098681)
<122> for A261865(400000213561)
<79> for A261865(700000110055)
<101> for A261865(900000115082)
<118> for A261865(130375)
<95> for A261865(600000136687)
<85> for A261865(700000175977)
<127> for A261865(169819)
<74> for A261865(1000000208668)
<95> for A261865(200000283049)
<95> for A261865(100000278291)
<79> for A261865(800000325129)
<83> for A261865(800000382758)
<101> for A261865(300000348034)
<82> for A261865(1000000377945)
<113> for A261865(600000368089)
<115> for A261865(600000388325)
<114> for A261865(900000458149)
<94> for A261865(800000523290)
<57> for A261865(500000002957)
<91> for A261865(500000020356)
<94> for A261865(500000086735)
<123> for A261865(1000000615915)
<139> for A261865(900000597046)
<113> for A261865(700000641956)
<122> for A261865(500000189745)
<138> for A261865(500000344679)
<109> for A261865(300000963142)
<129> for A261865(600001013758)
<139> for A261865(400001233496)
<177> for A261865(300001283249)
<107> for A261865(800001360974)
<110> for A261865(100001440670)
<119> for A261865(200001950373)
<127> for A261865(200002180270)
<119> for A261865(100001936762)
<130> for A261865(2135662)
<123> for A261865(700002323159)
<146> for A261865(600001831667)
<187> for A261865(2345213)
<109> for A261865(800002595588)
<145> for A261865(700002807851)
<155> for A261865(500003070679)
<194> for A261865(300003944386)
<127> for A261865(1000003765582)
<142> for A261865(1000003972101)
<165> for A261865(100003914551)
<110> for A261865(800004935331)
<141> for A261865(200005383478)
<186> for A261865(800005847131)
<157> for A261865(600005644046)
<185> for A261865(200006038768)
<178> for A261865(400008172024)
<183> for A261865(500008457011)
<163> for A261865(1000010164463)
<190> for A261865(800013713435)
<146> for A261865(900015388529)
<197> for A261865(600016805653)
<155> for A261865(700016980104)
<165> for A261865(1000017298685)
<185> for A261865(1000023033133)
<186> for A261865(100022787204)
<157> for A261865(700023653342)
<161> for A261865(700024255285)
<159> for A261865(900023644405)
<202> for A261865(900027523835)
<303> for A261865(700033230641)
<199> for A261865(300035412403)
<215> for A261865(500035617384)
<195> for A261865(800035729684)
<223> for A261865(600037942250)
<193> for A261865(46272966)
<221> for A261865(500060347857)
<197> for A261865(100072149460)
<226> for A261865(200071020948)
<219> for A261865(800096272342)
<273> for A261865(100096408342)
<195> for A261865(1000138255835)
<313> for A261865(600147859274)
<210> for A261865(300158020199)
<305> for A261865(300163012534)
<179> for A261865(400173203199)
<249> for A261865(500185649969)
<281> for A261865(900217000298)
<210> for A261865(222125822)
<222> for A261865(1000221604207)
<217> for A261865(237941698)
<227> for A261865(257240414)
<229> for A261865(1000299563639)
<191> for A261865(400330753112)
<237> for A261865(800371022066)
<197> for A261865(400388377102)
<222> for A261865(400519662337)
<247> for A261865(1000659372263)
<251> for A261865(400686850499)
<293> for A261865(900791054060)
<258> for A261865(201101992489)
<267> for A261865(1205703469)
<255> for A261865(501227521557)
<258> for A261865(501233713968)
<399> for A261865(1001313673399)
<299> for A261865(1558293414)
<271> for A261865(801556690220)
<286> for A261865(501665988151)
<257> for A261865(401963829763)
<357> for A261865(901991228083)
<290> for A261865(101979205483)
<293> for A261865(102084473850)
<282> for A261865(402539565301)
Alarm clock
Note the <399> for A261865(1001313673399)! :D
• 399.....woohoo! – Anush Feb 28 at 14:01
# JavaScript (Node.js), score $$\335\$$
Pretty straight-forward algorithm. I made it multithreaded to be a bit faster.
// Run with: node ./square-root-multiples.js
const findMaxK = (fromN, toN) => {
let maxK = 0;
let maxN = 0;
for (let n = fromN; n < toN; n++) {
for (let k = 2, s = n * n; k < s; k++) {
const r = Math.sqrt(k);
const x = r * Math.ceil(n / r);
if (x > n && x < n + 1) {
if (k > maxK) {
maxK = k;
maxN = n;
}
break;
}
}
}
return { n: maxN, k: maxK };
};
const BATCH_SIZE = 10000000;
const cluster = require("cluster");
if (cluster.isMaster) {
const start = Date.now();
let nextBatch = 0;
let maxK = 0;
const numCpus = require("os").cpus().length;
for (let i = 0; i < numCpus; i++) {
const worker = cluster.fork();
worker.on("message", (msg) => {
if (msg.k > maxK) {
maxK = msg.k;
console.log(
found A261865(${msg.n}) =${msg.k} in \${
(Date.now() - start) / 1000
}s
);
}
worker.send({ n: nextBatch, maxK });
nextBatch += BATCH_SIZE;
});
worker.send({ n: nextBatch });
nextBatch += BATCH_SIZE;
}
} else {
process.on("message", (msg) => {
process.send(findMaxK(msg.n, msg.n + BATCH_SIZE));
});
}
Output on my 2019 MacBook Pro:
found A261865(12779527) = 134 in 0.299s
found A261865(2345213) = 187 in 0.308s
found A261865(63856063) = 193 in 0.33s
found A261865(222125822) = 210 in 0.681s
found A261865(237941698) = 217 in 0.687s
found A261865(257240414) = 227 in 0.804s
found A261865(1217775885) = 230 in 2.704s
found A261865(1205703469) = 267 in 2.704s
found A261865(1558293414) = 299 in 3.306s
found A261865(4641799364) = 303 in 13.002s
found A261865(6600656102) = 323 in 20.045s
found A261865(11145613453) = 335 in 36.774s
found A261865(20641456345) = 354 in 73.047s
The highest K my machine found within 60s was A261865(11145613453) = 335.
• 335 is a great score. – Anush Feb 27 at 21:46
• @Noodle9 Might to be to do with the way he's batching his calculations? He actually calculates 4xxxxxxx in parallel with 6xxxxxxx so he'll find 63xxxxxx before 46xxxxxx. – Neil Feb 27 at 22:25
• @Neil Yes, just tried with SageMath and got A(63856063) -> 193 as well as A(46272966) -> 193. – Noodle9 Feb 28 at 0:07
# C (gcc), Score 335 on TIO
#include <stdio.h>
#include <math.h>
int main() {
double n = 0, k;
while( n += 13 ) {
for(k = 2; ceil(sqrt((n + 1) * (n + 1) / k)) - floor(sqrt(n * n / k)) < 2; k++);
if( k > 250 ) printf("%.lf -> %.lf\n", n, k);
}
return 0;
}
Try it online!
The algorithm to compute A(n) is took from the OEIS page.
The n for which A(n) is being computed, follows the heuristics below:
I computed these first few A(n) greater than 100
n -> A(n) | Factors of n
-------------------------------
130375 -> 118 | 5 5 5 7 149
169819 -> 127 | 13 13063
902236 -> 103 | 2 2 211 1069
1227105 -> 106 | 3 3 5 11 37 67
1793759 -> 105 | 11 179 911
1940874 -> 103 | 2 3 13 149 167
1962875 -> 105 | 5 5 5 41 383
2135662 -> 130 | 2 1067831
2345213 -> 187 | 13 13 13877
2470326 -> 111 | 2 3 411721
3461537 -> 115 | 3461537
4221630 -> 105 | 2 3 3 5 7 6701
4576794 -> 103 | 2 3 229 3331
5021205 -> 114 | 3 5 7 17 29 97
5396362 -> 110 | 2 2698181
8567238 -> 102 | 2 3 29 53 929
9182575 -> 103 | 5 5 89 4127
9983754 -> 102 | 2 3 3 11 50423
Where there are two notable local maximum: 127 and 187.
As you can see, in both cases the generating n is a multiple of 13, and that seemed to me a good enough reason to conjecture that, on average, multiples of 13 may generate an higher A(n) than most numbers of the same magnitude.
So I made a program to compute the multiples of 13 and leaving it running for a minute on TIO, the maximum I get is
11145613453 -> 335
• When I computed A(13^17) I got 5, not 498. Could there be an error in your code? – dingledooper Mar 1 at 10:05
• @dingledooper I tried it with the code of another answer and it gives 498 – Sheik Yerbouti Mar 1 at 10:55
• The answer I got says otherwise. Might it be possible that the other answer is also incorrect? – dingledooper Mar 1 at 11:14
• Yes, the actual result is 5, as evidenced by this Python code. Try it online! – user202729 Mar 1 at 12:22
• @dingledooper the algorithm is copied OEIS's page and is unlikely to be as broken as the math of your program. user202729 could you explain why your algorithm should work? Men if you don't like the "13 guess" just say it and if the OP thinks that my answer is invalid I will indeed delete it. – Sheik Yerbouti Mar 1 at 14:14
# Python 3, score $$\193\$$
from math import ceil, floor, sqrt
def f(n):
k = 2
while ceil(sqrt((n + 1)**2/k)) - floor(sqrt(n**2/k)) < 2:
k += 1
return k
Try it online!
Straight forward port of the Mathematica code on OEIS A261865 to get the ball rolling.
On my laptop (Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz with 16 GB RAM):
from functools import wraps
import errno
import os
import signal
class TimeoutError(Exception):
pass
def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
def decorator(func):
def _handle_timeout(signum, frame):
raise TimeoutError(error_message)
def wrapper(*args, **kwargs):
signal.signal(signal.SIGALRM, _handle_timeout)
signal.alarm(seconds)
try:
result = func(*args, **kwargs)
finally:
signal.alarm(0)
return result
return wraps(func)(wrapper)
return decorator
from math import ceil, floor, sqrt
@timeout(60)
def f():
n = 1
m = 1
while 1:
k = 2
while ceil(sqrt((n + 1)**2/k)) - floor(sqrt(n**2/k)) < 2:
k += 1
if k > m:
print(f'A({n}) -> {k}')
m = k
n += 1
f()
produces:
A(1) -> 2
A(3) -> 3
A(23) -> 7
A(30) -> 15
A(184) -> 38
A(8091) -> 43
A(16060) -> 46
A(16907) -> 58
A(20993) -> 61
A(26286) -> 97
A(130375) -> 118
A(169819) -> 127
A(2135662) -> 130
A(2345213) -> 187
A(46272966) -> 193
Traceback (most recent call last):
File "A261865_timeout.py", line 41, in <module>
f()
File "A261865_timeout.py", line 18, in wrapper
result = func(*args, **kwargs)
File "A261865_timeout.py", line 35, in f
while ceil(sqrt((n + 1)**2/k)) - floor(sqrt(n**2/k)) < 2:
File "A261865_timeout.py", line 12, in _handle_timeout
raise TimeoutError(error_message)
__main__.TimeoutError: Timer expired
• Note that the Python code in this answer suffers from a floating point errors that results in huge $k$ value for large input. Nevertheless the value reported by this answer is correct. – user202729 Mar 1 at 12:24
# C (gcc), A261865(6600656102)=323 on TIO
This out this checks every A(n) sequentially (well, based on the small CHUNKSIZE, and it skips a lot of values based on optimization, see the How it works).
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <stdint.h>
//#define L unsigned __int128
//#define F __float128
#define L unsigned long long
#define F long double
#define MAXN (1024ull*1024ull*1024ull*1024ull)
#define CHUNKSIZE (1024ull*1024ull)
#define CHECKS 8
#define BUFFER 1024
int main() {
F steps[CHECKS + BUFFER], values[CHECKS + BUFFER];
steps[0] = sqrtl(2) / (sqrtl(2) - 1);
steps[1] = sqrtl(3) / (sqrtl(3) - 1);
int n=5, min_check;
for (int i = 2; i < CHECKS + BUFFER; n++) {
int ok = 1;
for (int tmp = 2; tmp < sqrtl(n) + 0.5; tmp++)
if (n % (tmp * tmp) == 0) ok = 0;
if (!ok)
continue;
values[i] = n;
steps[i++] = sqrtl(n);
if (i == CHECKS) min_check = i;
}
char chunk[CHUNKSIZE] = { 0 };
#pragma omp parallel for private(chunk) schedule(static, 1)
for (L chunki = 0; chunki < (MAXN / CHUNKSIZE); chunki++) {
L start = chunki * CHUNKSIZE, end = (chunki + 1) * CHUNKSIZE;
for (L i = start / steps[1] + 1; i < end / steps[1]; i++)
chunk[(L)(i * steps[1]) - start] = 1;
for (int k = 2; k < CHECKS; k++)
for (L i = start / steps[k] + 1; i < end / steps[k]; i++)
chunk[(L)(i * steps[k]) - start] = 0;
for (L i = start / steps[0] + 1; i < end / steps[0]; i++) {
L xi = (L)(i * steps[0]) - start;
if (chunk[xi] == 1) {
chunk[xi] = 0;
L x = xi + start;
int k = min_check;
while (++k) {
F sqrt_k = steps[k];
F try = sqrt_k * ceill(x/sqrt_k);
if (x < try && try < x + 1) break;
}
k = values[k];
if (k > 249)
printf("A261865(%llu)=%d\n", x, k), fflush(stdout);
}
}
}
}
Try it online!
### How it works
We sieve through the numbers: floor(i * sqrt(n)) is a Beatty sequence. The nice thing about these is that their complement is easy to calculate. So if floor(i * sqrt(2)) hits 1 2 4 5 7 8 9 …, floor(i * (sqrt(2) / (sqrt(2) - 1))) hits 3 6 10 13 17 20 23 … This alone saves us about two third calculation time.
For n>4, sqrt(n) is actually bigger than (sqrt(2) / (sqrt(2) - 1). So we initialize an array of 0s, set every non-multiple of sqrt(3) to 1, and for every n between 3 and CHECKS we set every multiple of sqrt(n) back to 0. We then take the big steps of (sqrt(2) / (sqrt(2) - 1) and if there the array is set to 1, we know that A(i) is at least CHECKS.
A(n) must be a square free number, as the square of its square divisor (8 -> 4 -> 2) will already have hit the same numbers, so we can skip 4 8 9 12 16 …, too.
The actual computation of A(n) is borrowed from @Noodle9. But we can start with i = CHECKS.
Bundled all of this with OpenMP for easy multithreading for the chunks and some performance testing to get a good value for CHECKS, we get to 383 on TIO. If you want to get further than A(2^64), you need to define L and F to unsigned __int128 and __float128.
• Why would this find A261865(557784208155) = 383 but not A261865(105991039757) = 385 or A261865(119034690206) = 397? – Neil Feb 28 at 17:27
• @Neil Oh, OpenMP reorders the chunks more than I thought, thus the earlier values get skipped. This feels like a magic constant, so I'll drop the multithreading for now. Thanks for noticing! – xash Feb 28 at 17:36
• @Neil Fixed the chunk order to be sequentially. And some rounding errors have also crawled in. But the max value I get in TIO now gets verified by your version with __int128, so that is something I should have done earlier. :-) – xash Feb 28 at 18:03
# C (clang), score 210 on TIO
#include <stdio.h>
int main() {
unsigned long long x, i, j, k;
for (x = 2; x < 222125823ULL; x++) {
for (i = 1; ; i++) {
for (j = 2; j * j * i <= x * x; j *= 2);
for (k = j / 2; k /= 2; ) if ((j - k) * (j - k) * i > x * x) j -= k;
if (j * j * i < (x + 1) * (x + 1)) break;
}
if (i > 99) {
printf("A261865(%llu)=%llu\n", x, i);
fflush(stdout);
}
}
}
Try it online! Calculates all A261865 entries from 2 to 222,125,822 in just under 60s on TIO. Annoyingly, it doesn't seem to be possible to reach 250,000,000 within the time limit. In any case it is limited to about 2,000,000,000 due to the use of 64-bit integer arithmetic (no square root operations). By switching to unsigned __int128 you could quickly calculate A261865 for specific higher values of course.
Edit: Just tried a threaded version on my local 16-core processor and it can reach 2,000,000,000 in just under 60 seconds, finding A261865(1558293414)=299. Soon after that it stops working because j reaches 4,294,967,296 and then j * j overflows to zero.
Edit: I left the 128-bit single-threaded version running overnight and it found the following record values for A261865:
A261865(3)=3 (0s)
A261865(23)=7 (0s)
A261865(30)=15 (0s)
A261865(184)=38 (0s)
A261865(8091)=43 (0s)
A261865(16060)=46 (0s)
A261865(16907)=58 (0s)
A261865(20993)=61 (0s)
A261865(26286)=97 (0s)
A261865(130375)=118 (0s)
A261865(169819)=127 (0s)
A261865(2135662)=130 (0.7s)
A261865(2345213)=187 (0.8s)
A261865(46272966)=193 (17.8s)
A261865(222125822)=210 (91.7s)
A261865(237941698)=217 (98.6s)
A261865(257240414)=227 (1.8m)
A261865(1205703469)=267 (8.9m)
A261865(1558293414)=299 (11.6m)
A261865(4641799364)=303 (34.9m)
A261865(6600656102)=323 (49.9m)
A261865(11145613453)=335 (85m)
A261865(20641456345)=354 (2.7h)
A261865(47964301877)=358 (6.3h)
A261865(105991039757)=385 (14.2h)
A261865(119034690206)=397 (16h)
A261865(734197670805)=455 (4.4d)
• I can read a comment of the OP saying that you should "compute one large value of A(n) within the time limit" rather all the A(n) till a certain maximum. Am I right to be confused? – Sheik Yerbouti Feb 27 at 20:37
• @SheikYerbouti I am also confused. I can easily adapt my code to calculate one value e.g. A261865(20641456345) in a fraction of a second. I don't know how I'm supposed to find a nice large value of A261865 to compute in the first place though... – Neil Feb 27 at 20:41
• I think that every answer here adopted this interpretation of finding all A(n) till a certain maximum. It surely is an easier way to score the answers, but if the OP won't aknowledge it, then your score would be inaccurate. (And you couldn't easily make an accurate one in C) – Sheik Yerbouti Feb 27 at 20:53
• @Anush Now 455! – Neil Mar 4 at 13:08
• @Anush Now 501! (on my other answer) – Neil Mar 7 at 12:16
# C (clang), score 299 on TIO
#include <stdio.h>
#define A261865 256
unsigned long long J[A261865], K[A261865];
int main() {
int i, m;
unsigned long long x, j, k;
for (i = 1; i < A261865; i++) J[i] = K[i] = i;
m = 2;
for (x = 1; x < 0x60000000ULL; x++) {
for (i = 1; ; i++) {
if (i < A261865) {
while (J[i] <= x * x) J[i] += K[i] += i + i;
if (J[i] < (x + 1) * (x + 1)) break;
} else {
for (j = k = i; j <= x * x; j += k += i + i);
if (j < (x + 1) * (x + 1)) break;
}
}
if (i > m) {
printf("A261865(%llu)=%d\n", x, i);
fflush(stdout);
m = i;
}
}
}
Try it online! Takes under 60s to output:
A261865(3)=3
A261865(23)=7
A261865(30)=15
A261865(184)=38
A261865(8091)=43
A261865(16060)=46
A261865(16907)=58
A261865(20993)=61
A261865(26286)=97
A261865(130375)=118
A261865(169819)=127
A261865(2135662)=130
A261865(2345213)=187
A261865(46272966)=193
A261865(222125822)=210
A261865(237941698)=217
A261865(257240414)=227
A261865(1205703469)=267
A261865(1558293414)=299
Using a version with a 128-bit integer type and larger square root cache, this code has found the following values on my PC:
A261865(3)=3 (0s)
A261865(23)=7 (0s)
A261865(30)=15 (0s)
A261865(184)=38 (0s)
A261865(8091)=43 (0s)
A261865(16060)=46 (0s)
A261865(16907)=58 (0s)
A261865(20993)=61 (0s)
A261865(26286)=97 (0s)
A261865(130375)=118 (0s)
A261865(169819)=127 (0s)
A261865(2135662)=130 (0s)
A261865(2345213)=187 (0s)
A261865(46272966)=193 (1s)
A261865(222125822)=210 (5.2s)
A261865(237941698)=217 (5.7s)
A261865(257240414)=227 (6.2s)
A261865(1205703469)=267 (31s)
A261865(1558293414)=299 (41.8s)
A261865(4641799364)=303 (2.1m)
A261865(6600656102)=323 (3m)
A261865(11145613453)=335 (5.2m)
A261865(20641456345)=354 (9.8m)
A261865(47964301877)=358 (22.9m)
A261865(105991039757)=385 (52m)
A261865(119034690206)=397 (59.1m)
A261865(734197670865)=455 (6.4h)
A261865(931392113477)=501 (8.4h)
A261865(1560674332481)=505 (14.2h)
It also found A261865(5928861186373)=509, but I wasn't able to run it continuously for the ~2 days needed, so I don't know how long it took.
|
2021-04-23 02:30:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 35, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.369415283203125, "perplexity": 8372.882148384224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00090.warc.gz"}
|
https://ssconlineexam.com/forum/362/What-is-the-geometric-means-of-the-observations
|
# What is the geometric means of the observations 125, 729, 1331?
[ A ] 495 [ B ] 1485 [ C ] 2221 [ D ] None of these
Answer : Option A Explanation : G.M. = $3\sqrt{125 X 729 X 1331}$ = $3\sqrt{5^3 X 9^3 X 11^3}$ = 5 × 9 × 11 = 495
|
2020-07-14 10:38:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6862940192222595, "perplexity": 1815.64071064207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149819.59/warc/CC-MAIN-20200714083206-20200714113206-00395.warc.gz"}
|
https://mathoverflow.net/questions/269491/bounding-the-norm-of-a-contraction-matrix
|
# Bounding the norm of a contraction matrix
Suppose I have positive semidefinite matrices $A$ and $B$. Then
$$\begin{bmatrix} A & X\\ X^T & B\end{bmatrix} \succeq 0$$
for $X = A^{\frac 12} C B^{\frac 12}$, where $C$ is the contraction matrix with maximum eigenvalue less than $1$.
Horn, Roger A.; Johnson, Charles R., Topics in matrix analysis, Cambridge etc.: Cambridge University Press. viii, 607 p. \sterling 45.00; \$59.50 (1991). ZBL0729.15001. I have some questions: 1. Is it possible for$C$to have negative eigenvalues? 2. Are their any properties of$C$other than eigenvalue$< 1$? Please, suggest a book or something. 3. Is it possible to compute the maximum bounds of the matrix$C$(Where any random contraction is inside that maximum bound)? I shall be very thankful for any guidance and suggestion. Thanks. ## 1 Answer We have the following linear matrix inequality (LMI) $$\begin{bmatrix} \mathrm A \,\, & \mathrm X\\ \mathrm X^{\top} & \mathrm B\end{bmatrix} \succeq \mathrm O$$ where$\mathrm X = \mathrm A^{\frac 12} \mathrm C \, \mathrm B^{\frac 12}$and$\mathrm A, \mathrm B \succeq \mathrm O$. Hence, $$\begin{bmatrix} \mathrm A^{\frac 12} \mathrm A^{\frac 12} & \mathrm A^{\frac 12} \mathrm C \, \mathrm B^{\frac 12}\\ \mathrm B^{\frac 12} \mathrm C^{\top} \mathrm A^{\frac 12} & \mathrm B^{\frac 12} \mathrm B^{\frac 12}\end{bmatrix} = \begin{bmatrix} \mathrm A^{\frac 12} & \\ & \mathrm B^{\frac 12}\end{bmatrix} \begin{bmatrix} \mathrm I & \mathrm C\\ \mathrm C^{\top} & \mathrm I\end{bmatrix} \begin{bmatrix} \mathrm A^{\frac 12} & \\ & \mathrm B^{\frac 12}\end{bmatrix} \succeq \mathrm O$$ which holds if $$\begin{bmatrix} \mathrm I & \mathrm C\\ \mathrm C^{\top} & \mathrm I\end{bmatrix} \succeq \mathrm O$$ Using the Schur complement, the LMI above can be rewritten in the form $$\mathrm I - \mathrm C^{\top} \mathrm C \succeq \mathrm O$$ which is equivalent to $$\lambda_{\min} (\mathrm I - \mathrm C^{\top} \mathrm C) = 1 - \lambda_{\max} (\mathrm C^{\top} \mathrm C) = 1 - \| \mathrm C \|_2^2 \geq 0$$ and, thus, we obtain an upper bound on the spectral norm of$\rm C$$$\color{blue}{\| \mathrm C \|_2 \leq 1}$$ We conclude that $$\| \mathrm C \|_2 \leq 1 \implies \begin{bmatrix} \mathrm A \,\, & \mathrm X\\ \mathrm X^{\top} & \mathrm B\end{bmatrix} \succeq \mathrm O$$ If$\rm C$is symmetric, then its eigenvalues are real and its eigenvectors are orthogonal. Let its spectral decomposition be$\rm C = Q \Lambda Q^{\top}$. Hence, $$\mathrm I - \mathrm C^{\top} \mathrm C = \mathrm I - \mathrm C^2 = \mathrm Q \, \left( \mathrm I - \Lambda^2 \right) \, \mathrm Q^{\top} \succeq \mathrm O$$ which is equivalent to$\mathrm I - \Lambda^2 \succeq \mathrm O$, i.e., all the eigenvalues of$\rm C$are in$[-1,1]$. • Thank you very much for explaining the detail. Actually in my case, I have two known positive semi-definite matrices A and B and the matrix X is unknown. I also have the representation of X in terms of contraction matrix C. What I need is some kind of parametric form of matrix C which can give me the bounds of matrix X? May 15, 2017 at 2:02 • What is the definition of "contraction matrix"? Is it symmetric? Are the eigenvalues real? Is the$(2,1)$-th block$\rm X$or$\rm X^{\top}$, after all? May 15, 2017 at 9:18 • The (2,1) block is X. Since, I assume that matrix X is positive semi definite, I can make the symmetric assumption on contraction matrix C. May 15, 2017 at 10:01 • If the$(2,1)$block is$\rm X\$ then the block matrix is not symmetric. Only the symmetric part contributes to a quadratic form. May 15, 2017 at 10:03
• I am sorry, the (1,2) block is X and (2,1) block is X^T. The joint matrix is a co variance matrix. May 15, 2017 at 11:55
|
2022-09-27 23:39:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104934334754944, "perplexity": 308.08832030297566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00158.warc.gz"}
|
https://www.physicsforums.com/threads/integration-by-substituition-question.59811/
|
# Integration by substituition question.
1. Jan 14, 2005
### misogynisticfeminist
I'm able to do this question but my answer is different from that in the book.
$$\int \frac {x^3}{\sqrt{ x^2 -1}} dx$$
what I did was to take the substituition $$u= (x^2-1)^1^/^2$$
so, $$x^2 -1 = u^2$$
$$x^2 = u^2+1$$
$$x= (u^2+1)^1^/^2$$
$$x^3= (u^2+1)^3^/^2$$
$$dx= \frac {1}{2}(u^2+1)^-^1^/^2 (2u)du$$
$$dx= u(u^2+1)^-^1^/^2 du$$
$$\int \frac {x^3}{\sqrt{ x^2 -1}} dx= \int \frac {(u^2+1)^3^/^2}{u} . \frac {u}{(u^2+1)^1^/^2} du$$
which simplifies to,
$$\int u^2+1 du$$
$$\frac {1}{3} u^3 +u+C$$
$$\frac {1}{3} (x^2-1)^3^/^2 + (x^2-1)^1^/^2$$
that's my final answer but the book gave,
$$\frac {1}{3} (x^2+2)\sqrt{x^2-1}+C$$
where does my mistake lie?
2. Jan 14, 2005
### Hurkyl
Staff Emeritus
3. Jan 14, 2005
### dextercioby
Yap,it's the same "animal".It's just the fur is a little shady...
Daniel.
4. Jan 14, 2005
### MathStudent
5. Jan 14, 2005
### vincentchan
$$\frac {1}{3} (x^2-1)^3^/^2 + (x^2-1)^1^/^2$$
$$= (x^2-1)^1^/^2 (\frac {1}{3}(x^2-1) +1)$$
$$= (x^2-1)^1^/^2 (\frac {1}{3}x^2-\frac {1}{3}+1)$$
$$= (x^2-1)^1^/^2 (\frac {1}{3}x^2+\frac {2}{3})$$
$$\frac {1}{3} (x^2+2)\sqrt{x^2-1}$$ :surprised:
6. Jan 14, 2005
### Curious3141
Isn't y^(3/2) = (y)(y^(1/2)) ? Rearrange and simplify what you've got and it should come out the same.
EDIT : NM, Vincent has shown the working
7. Jan 15, 2005
### misogynisticfeminist
hey thanks, that's a very good tip !
I am wondering if the answer which I gave is in a different form from the one in the answer script during an exam, will I be penalised?
8. Jan 15, 2005
### dextercioby
Unless your teacher is a narrow minded s.o.b.,i don't see why.If i were u,on this integral i would have gone for another substitution,using hyperbolic sine and cosine.
Daniel.
|
2017-02-27 03:10:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7655555605888367, "perplexity": 4938.873807318634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172404.62/warc/CC-MAIN-20170219104612-00297-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://duongphong.vip/hq1t4bw/eigenvalues-of-a-matrix.php
|
# Eigenvalues of a matrix
Equation (1) is the eigenvalue equation for the matrix A. Description. That is, the characteristic equation may have repeated roots. Since this is a Laplacian matrix, the smallest eigenvalue is $\lambda_1 = 0$. The first thing we need to do is to define the transition matrix. In linear algebra , the eigenvalue is the scalar factor of a non-zero vector which denotes the scalar quantity by which a linear transformation changes . λ= 0: We want x= (x 1,x 2) such that 2 6 1 3 −0 1 0 0 1 x 1 x 2 = 0 0 The coefficient matrix of this Eigenvectors of real symmetric matrices are orthogonal similar to an upper triangular matrix, with the eigenvalues on the diagonal (the precise statement is Positive semidefinite decomposition, Laplacian eigenvalues, and the oriented incidence matrix 11 Eigenvalues of a sum of Hermitian positive definite circulant matrix and a positive diagonal matrix matrix has only a nite spectrum of eigenvalues; but one might reasonably hope that the middle part of this spectrum, away from the edges, will still have similar properties to the actual in nite spectrum of the nucleus. eigvals instead. Complex Eigenvalues 2. In this equation A is an n-by-n matrix, v is a non-zero n-by-1 vector and λ is a scalar (which may be either real or complex). Understand the geometry of 2 × 2 and 3 × 3 matrices with a complex eigenvalue. eigh (a, UPLO='L') [source] ¶ Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. 1. No cable box required. FactLet $\lambda_1 \le \lambda_2 \le \lambda_3 \le \lambda_4$ be the eigenvalues of this matrix. If you're behind a web filter, please make sure that the domains *. If A is a real symmetric, n n matrix, then A has real eigenvalues and there exists an orthonormal basis of Rn consisting of eigenvectors of A. The eigenvalues of the 2-by-2 block are also eigenvalues of A: Edexcel FP3 June 2015 Exam Question 3a. Eigenvalues and Eigenvectors Consider multiplying a square 3x3 matrix by a 3x1 (column) vector. The eigenvectors for D 0 (which means Px D 0x/ fill up the nullspace. Learn to recognize a rotation-scaling matrix, and compute by how much the matrix rotates and scales. . org and *. matrix has only a nite spectrum of eigenvalues; but one might reasonably hope that the middle part of this spectrum, away from the edges, will still have similar properties to …Positive semidefinite decomposition, Laplacian eigenvalues, and the oriented incidence matrix 11 Eigenvalues of a sum of Hermitian positive definite circulant matrix and a positive diagonal matrix2/14/2008 · When the matrix is large, the matrix A is typically factored as a product of 3 matrices A=U*D*V where D is diagonal and its elements are the eigenvalues of A, and U and V have nice properties. Once we've got that down we'll practice finding eigenvalues by going Given a particular eigenvalue λ of the n by n matrix A, define the set E to Therefore, the eigenvalues of A are λ = 4,−2. Proposition. 3. The matrix S has the real eigenvalue as the first entry on the diagonal and the repeated eigenvalue represented by the lower right 2-by-2 block. The 3x3 matrix can be thought of as an operator - it takes a vector, operates on it, and returns a new vector. Householder’s method is used to construct a similar tridiagonal matrix. Example: a matrix with 3 rows and 5 columns can be added to another matrix …Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors 45 min 4 Examples Overview and Definition of Eigenvalues and Eigenvectors Example #1 find the Eigenvalue for the given Eigenvector Example #2 find the Eigenvector given its corresponding Eigenvalue Example #3 find a basis for the corresponding Eigenspace Example #4 find a basis for the corresponding Eigenspace Exploring the…Revision on eigenvalues and eigenvectors The eigenvalues or characteristic root s of an N×N matrix A are the N real or complex number λi such that the equation Ax = λx has non-trivial solutions λ1, λ2, λ3, , λN . 03 Symmetric Matrices Jeremy Orlo Symmetric matices are very important. linalg. An eigenvalue is a scalar and is symbolized by the Greek letter lambda, but for simplification, it is abbreviated to L. In factor analysis, eigenvalues are used to condense the variance in a correlation matrix. Thus the jth eigenvalue is r[1,j] + i c[1,j]. , the multiplicity of every eigenvalue is 1. Calculating eigenvalues and eigenvectors for age- and stage-structured populations is made very simple by computers. It decomposes matrix using LU and Cholesky decomposition. A matrix equation of this form can only be solved if the determinant of the matrix is nonzero (see Cramer’s Rule) – that is, if det(A−λI) = 0 Since this equation is a polynomial in λ, commonly called the characteristic polynomial, we only need to find the roots of this polynomial to find the eigenvalues. wikidot. 11) express the same transformation, but in di erent bases). 22 Jun 2018 Together we'll learn how to find the eigenvalues of any square matrix. 1. Theorem. By definition ‚ is an eigenvalue of A if there is a nonzero vector ~v in Rn such that A~v = ‚~v ‚~v ¡ A~v = ~0 (‚In ¡ A)~v = ~0An an eigenvector, ~v needs to be a nonzero vector. The second eigenvector is (1,−1)—its signs are reversed by R. the vector the determinant of a matrix is the product of its eigenvalues. The eigenvalues of An are the eigenvalues of A raised to the n Rayleigh Quotient form of eigenvalues Computing Eigenvalues and Eigenvectors It is not too difficult to compute eigenvalues and their corresponding eigenvectors when the matrix transformation at hand has a clear geometric interpretation. ) Then the eigenvalues are found by using the quadratic formula, as usual. Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. 4. If A is symmetric then the eigenvalues are real. In the lattice approximation of the Schr odinger operator + V is replaced with a large symmetric sparse matrix with random diagonal entries. The techniques used here are practical for $2 \times 2$ and $3 \times 3$ matrices. (27) 4 Trace, Determinant, etc. Suppose that $A$ is a square matrix of size $n$, $\vect{x}\neq\zerovector$ is a vector in $\complex{n}$, and Eigenvalues and Eigenvectors import numpy as np import matplotlib. The eigenvalues of a matrix m are those for which for some nonzero eigenvector . eigsh1. Stack Exchange network consists of 175 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. The Eigenvalues of a 2x2 Matrix calculator computes the eigenvalues associated with a 2x2 matrix. In particular, we Eigenvalues and Eigenvectors Calculator for a 5 X 5 Real Matrix This page contains a routine that numerically finds the eigenvalues and eigenvectors of a 5 X 5 Real Matrix. Let's see if visualization can make these ideas more intuitive. So if you only need the eigenvalues of a matrix then do not use linalg. There is a converse to the above correspondence between the eigenvalues of awhere D0is a tridiagonal matrix. It is natural to search for the \best" basis for a given matrix, i. Also, the eigenvalues and eigenvectors can be used to calculate the matrix exponential of the system matrix (through spectral decomposition). Such an x is called an eigenvector corresponding to the eigenvalue λ. ru Thanks to: Philip Petrov (https://cphpvb. elements. Definition of Dominant Eigenvalue and Dominant Eigenvector Let and be the eigenvalues of an matrix A. Trạng thái: Đã giải quyếtTrả lời: 1Eigenvectors and Eigenvalues explained visuallysetosa. the other discs, then it contains ‘eigenvalues. Many problems present themselves in terms of an eigenvalue problem: A·v=λ·v. With the program EIGENVAL. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. . Example. With help of this calculator you can: find the matrix determinant, the rank, raise the matrix to a power, find the sum and the multiplication of matrices, calculate the inverse matrix…Definition EEM Eigenvalues and Eigenvectors of a Matrix. 5. the rows must match in size, and the columns must match in size. Hence the eigenvalues ( ) vary …Define Eigenvalues. It is important to note that only square matrices have eigenvalues and eigenvectors associated with them. The dimension of the eigenspace corresponding to an eigenvalue is less than or equal to the multiplicity of that eigenvalue. The eigenvalues and eigenvectors of a matrix are scalars and vectors such that . As we said before, this is actually unlikely to happen for a random matrix. Y3 [in the sense that "quadratic" convergence has exponent 2]. Note that the matrix …Hi Everyone, Assume that we have a real symmetric matrix $H$, which can be written in the form $H=D \cdot B$, where $D$ is a positive diagonal matrix, and $B$ is a The eigenvalues are immediately found, and finding eigenvectors for these matrices then becomes much easier. org are unblocked. Note: • If B in another n×n matrix, then eA+B = eAeB if and only if AB = BA. Eigenvalues and eigenvectors of a nonsymmetric matrix. Equation (1) can be stated equivalently as (A − λ I) v = 0, {\displaystyle (A-\lambda I)v=0,} (2) where I is the n by n identity matrix and 0 is the This calculators computes determinant, inverses, rank, transpose, characteristic polynomial ,eigenvalues and eigenvectors, LU and Cholesky decomposition. The characteristic polynomial of T is the characteristic polynomial of a matrix of T relative to a basis of V. The QR Method for Eigenvalues . Definition: A scalar λ is called an eigenvalue of the n × n matrix A is there is a nontrivial solution x of Ax = λx. Eigenvalues synonyms, Eigenvalues pronunciation, Eigenvalues translation, English dictionary definition of Eigenvalues. Eigenvalues and eigenvectors The subject of eigenvalues and eigenvectors will take up most of the rest of the course. What are eigenvectors and eigenvalues? Author. Also, any polynomial is the characteristic polynomial of a matrix. So if denotes the entry in the -th row and -th column then Calculator of eigenvalues and eigenvectors. Of course in the case of a symmetric matrix, AT = A, so this says that eigenvectors for A corresponding to di erent eigenvalues must be orthogonal. for a basis in which EigenValues is a special set of scalar values, associated with a linear system of matrix equations. Household sharing included. That is a longer story. Since every linear operator is given by left multiplication by some square matrix, finding the eigenvalues and eigenvectors of a linear operator is equivalent to finding the eigenvalues and eigenvectors of the associated square matrix; this is the terminology that will be followed. if you have a real-valued square symmetric matrices (equal to its transpose) then use scipy. Then the eigenvalue equation is Hx =Ax, with X~ 0. 286 Chapter 6. Example # 1: Find the eigenvalues and a basis for each eigenspace in for . , n. What does this mean geometrically?Get the free "Eigenvalues Calculator 3x3" widget for your website, blog, Wordpress, Blogger, or iGoogle. Let A be any n n matrix. e. *XP the eigenvalues up to a 4*4 matrix can be calculated. then the characteristic equation is . We compute det(A−λI) = 2−λ −1 1 2−λ = (λ−2)2 +1 = λ2 −4λ+5. eig returns both the eigenvalues and eigenvectors; scipy. In the simple floating-point eigenvalue problem, if A has either the symmetric or the hermitian indexing function then the returned object has float [ 8 …2 1. As the eigenvalues are real, and therefore can be ordered, we denote by i(A) the ith largest eigenvalue of A. • To do this, we find the values of λ which satisfy the characteristic equation of the matrix A, namely those values of λ for which. 3. 6/28/2018 · The eigenvalues of an upper triangular matrix (including a diagonal matrix) are the entries on the main diagonal Proof : a) By definition, each eigenvalue is a root of …Eigenvalues of a matrix in the streaming model Alexandr Andoni∗ Huy L. To state a further important property of eigenvalues of symmetric matrices, we need the following deflnition. Also, a 2x2 matrix ab cd ⎡⎤ ⎢⎥ ⎣⎦ is nonsingular iff ad – bc is nonzero. However, in Example ESMS4 , the matrix has only real entries, but is also symmetric, and hence Hermitian. We will use some specific matrices as examples here. Leave extra cells empty to enter non-square matrices. Theorem 1: The eigenvalues of a triangular matrix are the entries on its main diagonal. Computation of det(A - λ I) =0 leads to the Characteristic Polynomial, where the roots of this polynomial are the eigenvalues In matrix theory, the trace of a matrix is the sum of its eigenvalues and the determinant is the product of its eigenvalues. Eigenvalues andEigenvectors. If is an matrix of form [2] and is a real eigenvalue of then where is nilpotent of order and For a 3*3 and 4*4 matrix this is time consuming and complicated if Matlab or Maple is not available. A scalar λ is said to be a eigenvalue of A, if Ax = λx for some vector x 6= 0. • The eigenspace of A associated with the eigenvalue 1 is the line spanned by v1 = (−1,1). Find All the Eigenvalues of 4 by 4 Matrix (This page) Find a Basis of the Eigenspace Corresponding to a Given Eigenvalue Diagonalize a 2 by 2 Matrix if Diagonalizable The eig function can calculate the eigenvalues of sparse matrices that are real and symmetric. Tool to calculate eigenvalues of a matrix. The eigenvalues of a matrix are the roots of its characteristic equation. Spectral Theorem: A real n nsymmetric matrix has northogonal eigenvectors with real eigenvalues. In the case where A has n real and distinct eigenvalues, we have already solved the system by using the solutions eλitv 5 Numerical computation of eigenvalues The second basic problem of linear algebra is to compute the eigenvalues and eigenvectors of a square matrix. We have some properties of the eigenvalues of a matrix. Find the eigenvalues and eigenvectors of the matrix 2 6 1 3 From the above discussion we know that the only possible eigenvalues of Aare 0 and 5. Recipe: a 2 × 2 matrix with a complex eigenvalue is similar to a rotation-scaling matrix. e. For the matrix A = []. We call p( ) = det(A I) the characteristic polynomial of A. 1 Overview of diagonalizations We have seen that a transformation matrix looks completely di erent in di erent bases (the matrices (5. FINDING EIGENVALUES • To do this, we find the …9/13/2016 · Edexcel FP3 June 2015 Exam Question 3a. Let A be a square matrix of order n. Eigenvalue. (2) (Iis the identity matrix). I Eigenvectors corresponding to distinct eigenvalues are orthogonal. n maths physics one of the particular values of a certain2/6/2010 · I am having a very difficult time finding the eigenvalues and vectors of the following matrix. scipy. The non-symmetric problem of finding eigenvalues has two different formulations: finding vectors x such that Ax = λx, and finding vectors y such that y H A = λy H (y H implies a complex conjugate transposition of y). Its roots are 1 = 1+3i and 2 = 1 = 1 3i: The eigenvector corresponding to 1 is ( 1+i;1). for one of the complex eigenvalues of . htmlThe eig function can calculate the eigenvalues of sparse matrices that are real and symmetric. eig returns both the eigenvalues and eigenvectors; scipy. You can convince yourself that this result is reasonable by considering the constant matrix, C , for which every element is identically 0. The pattern of the dots depends on that the matrices A+E are real. the eigenvalues of A were the entries on the main diagonal of A. If the resulting V has the same size as A, the matrix A has a full set of linearly independent eigenvectors that satisfy A*V = V*D. When the multiplicities of some of a matrix's eigenvalues of greater than 1 it is not diagonalizable but instead for any matrix A there exists an invertible matrix V such that Definition EEM Eigenvalues and Eigenvectors of a Matrix. This is particularly true if some of the matrix entries involve symbolic parameters rather than speciflc numbers. eigenvalues of the matrix, since the matrix I A is singular precisely when is an eigenvalue of A . If follows that and , where denotes a complex conjugate, and denotes a transpose. Computes eigenvalues and eigenvectors of the generalized selfadjoint eigen problem. By the formal determinant of a matrix FE M,(Pn) we mean the matrix det F which we obtain by developing the determinant of F, considering the (commuting) blocks as elements. The generalization of this theorem to in nite dimensions is widely used in math and science. If is an eigenvalue of A, then: 1. the vector[V,D] = eig(A) returns matrices V and D. kastatic. In Example 1, the eigenvalues of this matrix were found to be λ = −1 and λ = −2. , v is an eigenvector of T if T(v) is a scalar multiple of v . , a matrix equation) that are sometimes also known as characteristic Does the order matter when you use the equation: Det(lambda x I-A)=0? Because in my linear algebra textbook they just have the order of the equation flipped FINDING EIGENVALUES. The figure indicates that the eigenvalues of A are very sensitive to perturbations in the matrix entries. is an eigenvalue of A m, for 2. Just type matrix elements and click the button. Eigenvalues and Eigenvectors calculation in just one line of your source code. However, in Example ESMS4, the matrix has only real entries, but is also symmetric, and hence Hermitian. The eigenvectors for D 1 (which means Special properties of a matrix lead to special eigenvalues and eigenvectors. Returns two objects, a 1-D array containing the eigenvalues of a , and a 2-D square array or matrix (depending on the input type) of the corresponding eigenvectors (in columns). Let X be an eigenvector of A associated to . This paper develops the necessary tools to understand the re-lationship between eigenvalues of the Laplacian matrix of a graph and the connectedness of the graph. The list contains each of the independent eigenvectors of the matrix, supplemented if necessary with an appropriate number of vectors of zeros. 1 Strictly Diagonally Dominant Matrices Before we get to Gershgorin’s Theorem it is convenient to introduce a condition for matrices known as Strictly Diagonally Dominant. SOLUTION: • In such problems, we first find the eigenvalues of the matrix. diagonalization of matrix A leaving the eigenvalues of A on the diagonal. It 3. Let me use det. [V,D] = eig(A) returns matrix V, whose columns are the right eigenvectors of A such that A*V = V*D. In the case where A has n real and distinct eigenvalues, we have already solved the system by using the solutions eλitv. In this lesson, we're going learn how to find the eigenvalues of a given matrix. So by Theorem HMRE, we were guaranteed eigenvalues that are real numbers. As varies from 0 to 1, B( ) has entries that vary continuously from B(0) = Dto B(1) = A. If A is invertible, then is an eigenvalue of A-1. In this section we will introduce the concept of eigenvalues and eigenvectors of a matrix. (The Ohio State University Linear Algebra Exam Problem)Add to solve later. This matrix calculator computes determinant, inverses, rank, characteristic polynomial, eigenvalues and eigenvectors. ij stands for the (i,j) minor of A, namely the (n − 1) × (n − 1) matrix obtained from A by erasing the i-th row and the j-th column. The diagonal matrix D contains eigenvalues. That is a major theme of this chapter (it is captured in a table at the very end). Nguy˜ˆen † Abstract We study the question of estimating the eigenvalues of a matrix in the streaming model, addressing a question posed in [Mut05]. 8 in [BD] It may happen that a matrix has some “repeated” eigenvalues. 2 FINDING THE EIGENVALUES OF A MATRIX Consider an n£n matrix A and a scalar ‚. Consider B( ) = A+ (1 )D, where D= diag(A), the diagonal matrix whose diagonal entries are those from A. The Jordan form yields some explicit information about the form of the solution on the initial value problem [4 ] which, according to the Fundamental Solution Theorem, is given by . Matrix A: Find. the eigenvalues and eigenvectors of Aare just the eigenvalues and eigenvectors of L. When the multiplicities of some of a matrix's eigenvalues of greater than 1 it is not diagonalizable but instead for any matrix A there exists an invertible matrix V such that Eigenvectors and Eigenvalues The eigenvectors of a matrix are those special vectors for which , where is an associated constant (possibly complex) called the eigenvalue. A v = w = λ v, {\displaystyle Av=w=\lambda v,} (1) then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Introduction Spectral graph theory has a long history. 1 A Graph-Theoretic Interpretation Instead of directly using the characteristic polynomial p n(x) = det(xI n) Most of the algorithms for eigen value computations scale to big-Oh(n^3), where n is the row/col dimension of the (symmetric and square) matrix. for a basis in which The University of Warwick Department of Statistics Largest eigenvalues and sample covariance matrices Andrei Bejan MSc Dissertation September 2005 Supervisor: Dr. Repeated eigenvalues If A has repeated eigenvalues, it may or may not have n independent eigen vectors. Example: Find the eigenvalues and associated eigenvectors of the matrix A = 2 −1 1 2 . In matrix theory, the trace of a matrix is the sum of its eigenvalues and the determinant is the product of its eigenvalues. An × matrix gives a list of exactly eigenvalues, not necessarily distinct. Usage eigen(x, symmetric, only. and so the matrix has complex eigenvalues. Second Order Equations as Systems 1 Complex Eigenvalues We know that to solve a system of n equations (written in matrix form as x′ = Ax), we must find n linearly independent solutions x1,,xn. Note: The two unknowns can also be solved for using only matrix manipulations by starting with the initial conditions and re-writing: Now it is a simple task to find γ 1 and γ 2. Find the eigenvalues and eigenvectors for 21 01 A Matrices and Eigenvectors It might seem strange to begin a section on matrices by considering mechanics, but underlying much of matrix notation, matrix algebra and terminology is the need to describe the physical world in terms of straight lines. The (black) circle shows the eigenvalues λ = −0. For examples, consider the diagonal matrix discussed above and the reflection matrix below: technique for computing the eigenvalues and eigenvectors of a matrix, converging superlinearly with exponent 2 + . The eigen value and eigen vector of a given matrix A, satisfies the equation Ax = λx , …Eigenvalues and Eigenvectors. Generally speaking, eigenvalues of a square matrix are roots of the so-called characteristic polynomial: That is, start with the matrix and modify it by subtracting the same variable from each diagonal element. However, ker(B I 2) = ker 0 2 0 0 = span( 1 0 ): Motivated by this example, de ne the geometric multiplicity of an eigenvalue Note that a square matrix of size always has exactly eigenvalues, each with a corresponding eigenvector. Similarly, An = PDnP 1. I How dicult is this? Eigenvalues are the roots of the characteristic polynomial. Eigenvectors of a matrix are vectors whose direction remains unchanged after multiplying by the matrix. is called the dominant eigenvalueof A if The eigenvectors corresponding to are called 1 dominant eigenvectorsof A. 1 = 2+i and λ. Since the system has two equations in two variables, 𝐴 is a “2×2” matrix, or a square 18. , X is an orthogonal matrix. Matrices and Eigenvectors It might seem strange to begin a section on matrices by considering mechanics, but underlying much of matrix notation, matrix algebra and terminology is the need to describe the physical world in but also involves finding eigenvalues and eigenvectors of matrices. Eigenvectors and Eigenvalues Eigenvalues/vectors are instrumental to understanding electrical circuits, If a matrix has complex eigenvalues, its sequence When all the eigenvalues of a symmetric matrix are positive, we say that the matrix is positive definite. Tool to calculate eigenvectors of a matrix. If you're behind a web filter, please make sure that the domains *. n maths physics one of the particular values of a certain Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. Live TV from 60+ channels. In theory we know what to do. Tác giả: Maths with JayLượt xem: 301KEigenvalues and Eigenvectors - MIT Mathematicsmath. Of particular interest in many settings (of which differential equations is one) is the followingProperties of real symmetric matrices I Recall that a matrix A 2Rn n is symmetric if AT = A. Eigenvalues and Eigenvectors can conclude that the eigenvalues of a matrix depend continuously on the entries of the matrix. 1 Eigenvalues and Eigenvectors Eigenvalue problem: If A is an nvn matrix, do there exist nonzero vectors x in Rn such that Ax is a scalar multiple of xMatrix calculator Solving systems of linear equations Determinant calculator Examples Eigenvalues calculator Wikipedia:Matrices. In this equation A is an n-by-n matrix, v is a Feb 27, 2014 Thanks to all of you who support me on Patreon. Eigenvalues are a special set of scalars associated with a linear system of equations (i. More: Diagonal matrix Jordan decomposition. There is a simple connection between the eigenvalues of a matrix and whether or not the matrix is nonsingular. "The factor with the largest eigenvalue has the most variance and so on, down to factors with small or negative eigenvalues that are usually omitted from solutions" (Tabachnick and Fidell, 1996, p. To calculate the eigenvectors of a sparse matrix, or to calculate the eigenvalues of a sparse matrix that is not real and symmetric, use the eigs function. values = FALSE, EISPACK = FALSE) Arguments As I mentioned before, assuming that the multiplicity of the eigenvalue lamda sub one is exactly one. EIGENVALUES OF THE LAPLACIAN AND THEIR RELATIONSHIP TO THE CONNECTEDNESS OF A GRAPH ANNE MARSDEN Abstract. Eigenvalues and Eigenvectors Consider multiplying a square 3x3 matrix by a 3x1 (column) vector. 1 Eigenvalues and Eigenvectors The product Ax of a matrix A ∈ M n×n(R) and an n-vector x is itself an n-vector. eig, use linalg. Eigenvalues and eigenvectors can be complex-valued as well as real-valued. However, when complex eigenvalues are encountered, they always occur in conjugate pairs as long as their associated matrix has only real entries. If x is an eigenvector of A. The techniques used here are practical for $2 \times 2$ and $3 \times 3$ matrices. 7 Eigenvalues, eigenvectors, diagonalization 7. The generalized eigenvalues of m with respect to a are those for which . Since this is a Laplacian matrix, the smallest eigenvalue is $\lambda_1 = 0$. The characterization of the eigenvalues of a symmetric matrix as constrained maxima of the Rayleight quotient lead to the following results about the eigenvalues of a perturbed symmetric matrix. 2 Inverse iteration The inverse iteration method is a natural generalization of the power iteration method. 1 Let A be an n × n matrix. ) Once the eigenvalues of a matrix (A) have been found, we can find the eigenvectors by Gaussian Elimination. Let A be a square matrix of order n and one of its eigenvalues. Examples of Problems using Eigenvalues Problem: If is an eigenvalue of the matrix A, prove that 2 is an eigenvalue of A2. n. There is a converse to the above correspondence between the eigenvalues of a In Example 1, the eigenvalues of this matrix were found to be λ = −1 and λ = −2. Definition C. linalg. org are unblocked. Matrix Eigenvalue Theory It is time to review a little matrix theory. Solving for a matrix in an equation with trace. Likewise this fact also tells us that for an $$n \times n$$ matrix, $$A$$, we will have $$n$$ eigenvalues if we include all repeated eigenvalues. This calculator allows to find eigenvalues and eigenvectors using the Characteristic polynomial. Therefore, there are nonzero vectors x such that A x = x (the eigenvectors corresponding to the eigenvalue λ = −1), and there are nonzero vectors x such that A x = −2 x (the eigenvectors corresponding to the eigenvalue λ = −2). Tác giả: Sal KhanFind All the Eigenvalues of 4 by 4 Matrix – Problems in https://yutsumura. We formally define an eigenvalue of a matrix 13 Tháng Mười Một 200914 Tháng Mười Một 2009Eigenvalue. A matrix with no negative entries can still have a negative eigenvalue!5 Numerical computation of eigenvalues The second basic problem of linear algebra is to compute the eigenvalues and eigenvectors of a square matrix. Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors Diagonalization Symmetric Matrices and Orthogonal Diagonalization 16- 2 7. Voorbeeld van reële matrix met niet-reële eigenwaarden. prove that in a 2×2 triangular matrix the eigenvalues are on the principal diagonal. Hide Ads Show Ads. Complex eigenvalues The matrix Q = 0 −1 rotates every vector in the plane by 90 . 1 > i, i 2, . 6. We will again be working with square matrices. Chapter 5 Eigenvalues and Eigenvectors 1 Eigenvalues and Eigenvectors 1. EIGENVALUES AND EIGENVECTORS Definition 7. EigenValues is a special set of scalar values, associated with a linear system of matrix equations. Eigenvalue definition is - a scalar associated with a given linear transformation of a vector space and having the property that there is some nonzero vector which when multiplied by the scalar is equal to the vector obtained by letting the transformation operate on the vector; especially : a root of the characteristic equation of a matrix. patreon. 144). Algebraic meth-ods have proven to be especially e ective in …The Eigenvalues(A) command solves the simple eigenvalue problem by returning the eigenvalues of Matrix A in a column Vector. The eigen value and eigen vector of a given matrix A, satisfies the equation Ax = λx , where, λ is a number, also called a scalar. The remainder of this chapter will discuss eigenvalues and eigenvectors, and the ways that they affect their respective systems. where I is the identity matrix. 4/17/2012 · Eigenvalues of the product of two matrices Page 1 of 2 1 2 Next > Mar 18, 2012 #1. Its only real eigenvector is the zero vector; any other vector’s direction changes when it is multiplied by Q. 7 Multiple eigenvalues. 12 Tháng Chín 2016Sometimes such a matrix will have eigenvalues which are critically important in applications to linear operators. The eigenvalues of a matrix are closely related to three important numbers associated to a square matrix, namely its trace, its deter-minant and its rank. Computes eigenvalues and eigenvectors of numeric (double, integer, logical) or complex matrices. λ 1 =-1, λ 2 =-2. Suppose that is a real symmetric matrix of dimension . For an matrix we get a polynomial of degree . Introduction Gauss-Jordan reduction is an extremely efiective method for solving systems of linear equations, but there are some important cases in which it doesn’t work very well. In the early days, matrix theory and linear algebra were used to analyze adjacency matrices of graphs. An n x n matrix will have n eigenvalues. Eigenvalues and Eigenvectors. The Eigenvalues of a Matrix - Mathonline mathonline. square roots of the eigenvalues of the associated square Gram matrix K = ATA, which are called singular values of the original matrix. Suppose A is an matrix and is a eigenvalue of A. Eigenvalues, Eigenvectors, and Diagonal-ization Math 240 Eigenvalues and Eigenvectors Diagonalization Complex eigenvalues Find all of the eigenvalues and eigenvectors of A= 2 6 3 4 : The characteristic polynomial is 2 2 +10. If A is real symmetric, then the right eigenvectors, V, are orthonormal. It has 1 0 trace 0 = λ1 + λ2 and determinant 1 = λ1 · λ2. The three zeros of this cubic polynomial are , so this matrix has three distinct eigenvalues. Eigenvalues and Eigenvectors calculation is just one aspect of matrix algebra that is featured in the new Advanced edition of Matrix ActiveX Component (MaXC). com/help/matlab/ref/eig. com/youtube?q=eigenvalues+of+a+matrix&v=j2B_vcp3tUQ Sep 12, 2016 Edexcel FP3 June 2015 Exam Question 3a. With complex eigenvalues we are going to have the same problem that we had back when we were looking at second order differential equations. Free Matrix Eigenvalues calculator - calculate matrix eigenvalues step-by-stepFINDING EIGENVALUES AND EIGENVECTORS EXAMPLE 1: Find the eigenvalues and eigenvectors of the matrix A = 1 −3 3 3 −5 3 6 −6 4 . net) for Bulgarian translationManuel Rial Costa for Galego translation Free matrix calculator - solve matrix operations and functions step-by-step Example: Find Eigenvalues and Eigenvectors of a 2x2 Matrix. Symmetric Matrices matrix is M(n). Detailed walkthrough of a proof that the trace of a square matrix is equal to the sum of its eigenvalues. The entries of a symmetric matrix are symmetric with respect to the main diagonal. Finding of eigenvalues and eigenvectors. The factor by which the magnitude of an eigenvector is changed by a given transformation. eigvals instead. 2 1. com/find-all-the-eigenvalues-of-4-by-4-matrixFind All the Eigenvalues of 4 by 4 Matrix (This page) Find a Basis of the Eigenspace Corresponding to a Given Eigenvalue Diagonalize a 2 by 2 Matrix if DiagonalizableThe matrix S has the real eigenvalue as the first entry on the diagonal and the repeated eigenvalue represented by the lower right 2-by-2 block. This is true for any triangular matrix, but is generally not true for matrices that are not triangular. Suppose that A is a square matrix of size n, x≠0 is a vector in Cn, and λ is a scalar in C. The distance to normality also depends on the difference between the absolute values of the sub- and super-diagonal entries. Learn to find complex eigenvalues and eigenvectors of a matrix. By definition of the kernel, thatDetermining the eigenvalues of a 3x3 matrix. EIGENVALUES AND EIGENVECTORS 5 Similarly, the matrix B= 1 2 0 1 has one repeated eigenvalue 1. to row echelon form, and solve the resulting linear system by back substitution. 7 00114-7, the closed form for the eigenvalues of a tridiagonal Toepliz matrix of the form 224 CHAPTER 7. The eigenvector (1,1) is unchanged by R. In that case, Equation 26 becomes: xTAx ¨0 8x. Menu Data >Matrices, ado language >Eigenvalues of square matrices Description matrix eigenvalues returns the real part of the eigenvalues in the 1 nrow vector r and the imaginary part of the eigenvalues in the 1 nrow vector c. The eigenvalues still represent the variance magnitude in the direction of the largest spread of the data, and the variance components of the covariance matrix still represent the variance magnitude in the direction of the x-axis and y-axis. , a matrix equation) that are sometimes also known as characteristic roots, characteristic values (Hoffman and Kunze 1971), proper values, or latent roots (Marcus and Minc 1988, p. The spectral radius of , denoted by , is the maximum of the absolute values of the elements of its spectrum:The eigenvalues and eigenvectors of a matrix are scalars and vectors such that . Computing Eigenvalues and Eigenvectors We can rewrite the condition Av = v as (A I)v = 0: where Iis the n nidentity matrix. kasandbox. (λ = −2 is a repeated root of the characteristic equation. Once we have the eigenvalues for a matrix we also show how to find the corresponding eigenvalues for the matrix. The may also be referred to by any of the fourteen other combinations of: [characteristic, eigen, latent, proper, secular] + [number, root, value]. 224 CHAPTER 7. A is invertible if and only if detA 6= 0. The calculator will perform symbolic calculations whenever it is possible. 2 Matrix exponentials Definition The exponential of a square matrix A is eA = X∞ k=0 1 k! Ak. We give two Matrix calculator Solving systems of linear equations Determinant calculator Examples Eigenvalues calculator Wikipedia:Matrices. A matrix that is not diagonalizable is said to be defective. org and *. Dense linear problems and decompositions » Reference. This is called the eigendecomposition. Eigenvalues of a matrix in the streaming model Alexandr Andoni∗ Huy L. In this lesson, we're going learn how to find the eigenvalues of a given matrix. • Recall from Section 1. the eigenvalues and eigenvectors of Aare just the eigenvalues and eigenvectors of L. 2/27/2014 · Finding Eigenvalues and Eigenvectors : 2 x 2 Matrix Example. For example, the eigenvalues of the identity matrix are all 1, but that matrix still has n independent eigenvectors. For an n n matrix, Eigenvectors always returns a list of length n . Eigenvalues are special numbers associated with a matrix and eigenvectors are special vectors. Consider the matrix equation7 Eigenvalues, eigenvectors, diagonalization 7. Online Matrix Calculator Powered by . ) (This is true, for example, if A has n distinct eigenvalues. The second smallest eigenvalue of a Laplacian matrix is the algebraic connectivity of the graph. Eigenvalues for the matrix $$M$$ are $$\lambda_1 = 5$$ and $$\lambda_2 = -1$$ (see tool for calculating matrices eigenvalues). Why? It is easy to see if we remember from the previous section that the determinant is a sum over products of matrix elements. 5 Numerical computation of eigenvalues The second basic problem of linear algebra is to compute the eigenvalues and eigenvectors of a square matrix. kastatic. eigenvalues of a matrixGiven a particular eigenvalue λ of the n by n matrix A, define the set E to be Eigenvalues are a special set of scalars associated with a linear system of equations (i. Of particular interest in many settings (of which differential equations is one) is the following Calculator of eigenvalues and eigenvectors Matrix calculator العَرَبِيَّة Български Čeština Deutsch English Español فارسی Français Galego Italiano 日本語 Македонски Nederlands Norsk Polski Português Română Русский Türkçe Українська Tiếng việt 中文(繁體) the eigenvalues and eigenvectors of Aare just the eigenvalues and eigenvectors of L. I think you can get bounds on the modulus of the eigenvalues of the product. Then the QR method is used to find all eigenvalues of the tridiagonal matrix. 9 that an nxn matrix is nonsingular iff it is equivalent to I (also this is true if the determinant is zero, which we will study in section 4. Show that eigenvalues of a Hermitian matrix $A$ are real numbers. Spectral Decomposition of a Matrix Description. bluebit. I To show these two properties, we need to consider complex matrices of type A 2Cn n, where C is the set ofA Hermitian matrix $(\textbf{A}^\ast = \textbf{A})$ has only real eigenvalues - Proof Strategy [Lay P397 Thm 7. eig, use linalg. Let us rearrange the eigenvalue equation to the form , where represents a vector of all zeroes (the zero vector). If you're seeing this message, it means we're having trouble loading external resources on our website. edu/~gs/linearalgebra/ila0601. So, any scalar multiple of an eigenvector is also an eigenvector for the given eigenvalue . • Exercise 2. 11) express the same transformation, but in di erent bases). Theorem Let Abe a square matrix The eigenvalues of an upper triangular matrix (including a diagonal matrix) are the entries on the main diagonal Proof : a) By definition, each eigenvalue is a root of the characteristic equation det( A – λI ) = 0. The result is a 3x1 (column) vector. com/patrickjmt !! Thanks to all of Find Eigenvalues of 3x3 Matrix - YouTube www. Properties of real symmetric matrices I Recall that a matrix A 2Rn n is symmetric if AT = A. We have to compute the characteristic polynomial p( ), nd the roots i,thenforeachiwe have the equation ( A−Eigenvalues and eigenvectors have many applications in both pure and applied mathematics. They may also be referred to by any of the fourteen other combinations of: [characteristic, eigen, latent, proper, secular] + [number, root, value]. 646). Section 5-8 : Complex Eigenvalues. Note: 1 or 1. 1, 2, . Every time we compute eigenvalues and eigenvectors we use this format, which can also be written as det(A - lambda vec(I)) =0, where I is the Identity matrix vec(I)=((1, 0), (0, 1)). DETERMINANTS AND EIGENVALUES 1. That is, the determinant of A Imust equal 0. It can also be termed as characteristic roots, characteristic values, proper values, or latent roots. The roots of this polynomial are λ. Now consider the problem of finding the eigenvectors for the eigenvalues λ 1 and λ 2. I For real symmetric matrices we have the following two crucial properties: I All eigenvalues of a real symmetric matrix are real. A. Eigenvalues and Eigenvectors Projections have D 0 and 1. To find eigenvalues of a matrix all we need to do is solve a polynomial. kasandbox. In particular, this means that the product of the eigenvalues of A nis M(n). Could someone please explain to me the steps? 1 -1 0 1 5 3 0 0 0 0 4 1 0 0 -1 4 i got 1, 8, 4, 4 1/2 as the eigenvalues when i created the upper triangle matrix and took the values off the show more I am having a very difficult time finding the eigenvalues and vectors of the following matrix. While Gershgorin’s Theorem can be The eigenvalues then still correspond to the spread of the data in the direction of the largest variance, whereas the variance components of the covariance matrix still defines the spread of the data along the axes: The product of the eigenvalues of a matrix is equal to the determinant of the matrix. In Example CEMS6 the matrix has only real entries, yet the characteristic polynomial has roots that are complex numbers, and so the matrix has complex eigenvalues. In other words, to diagonalize a square matrix. Then since A is diagonalizable, the dimension of the eigenspace corresponding to that eigenvalue, must also be one. If A is an invertible matrix with real, 1 Eigenvalues and eigenvectors of matrices A = namely the (n − 1) × (n − 1) matrix obtained from A by erasing the i-th row and the j-th column. The second examples is about a 3*3 matrix. (c)The above implies that a matrix is invertible if and only if none of its eigenvalues is zero. Eigenvalues and eigenvectors can be complex-valued as well as real-valued. It is We have some properties of the eigenvalues of a matrix. An × matrix gives a list of exactly eigenvalues, not necessarily distinct. Performs a complex Schur decomposition of a real or complex square matrix. λ= 0: We want x= (x 1,x 2) such that 2 6 1 3 −0 1 0 0 1 x 1 x 2 = 0 0 The coefficient matrix of this Eigenvalues of graphs L¶aszl¶o Lov¶asz November 2007 Contents is an orthogonal matrix and D is a diagonal matrix. pdf · PDF tệpThe only eigenvalues of a projection matrix are 0 and 1. io/ev/eigenvectors-and-eigenvaluesEigenvalues/vectors are instrumental to understanding electrical circuits, mechanical systems, ecology and even Google's PageRank algorithm. ) Suppose A is diagonalizable with independent eigenvectors and corresponding eigenvalues . Consider the 2 2 matrix A= " 1 3 3 1 #: First, this matrix corresponds to a linear transformation T: R2!R2 defined by T(x) = Ax for any numpy. 10) and (5. eigenvalue - (mathematics) any number such that a given square matrix minus that number times the identity matrix has a zero determinant. Steiger (Vanderbilt University) Eigenvalues, Eigenvectors and Their Uses 2 / 23Or copy & paste this link into an email or IM:Theorem If A is an matrix and is a eigenvalue of A, then the set of all eigenvectors of , together with the zero vector, forms a subspace of . Solution: Since is an eigenvalue of A, Av = v for some v 6=0. 2 examples are given : first the eigenvalues of a 4*4 matrix is calculated . Our ultimate goal is to prove the following theorem. We show that the eigenvalue “heavy hitters” of a matrix can be computed in a single pass. When the matrix is large, the matrix A is typically factored as a product of 3 matrices A=U*D*V where D is diagonal and its elements are the eigenvalues of A, and U and V have nice properties. com/academy/lesson/how-to-determine-the-eigenvaluesHow many eigenvalues a matrix has will depend on the size of the matrix. That’s generally not too bad provided we keep $$n$$ small. eigvals, returns only the eigenvalues. The eigenvalues and eigenvectors of the system matrix play a key role in determining the response of the system. 1 Eigenvalues and Eigenvectors The product Ax of a matrix A ∈ M n×n(R) and an n-vector x is itself an n-vector. Because equal matrices have equal dimensions, only square matrices can be symmetric. 10) and (5. The eigenvalue specifies the size of the eigenvector. Free Matrix Eigenvalues calculator - calculate matrix eigenvalues step-by-step Special properties of a matrix lead to special eigenvalues and eigenvectors. It is natural to search for the \best" basis for a given matrix, i. the eigenvalues of A) are real numbers. , a matrix equation) that are sometimes also known as characteristic roots, characteristic values (Hoffman and Kunze 1971), proper values, or latent roots (Marcus and Minc 1988, p. 5 The Eigenvalues of A n 5. Find more Mathematics widgets in Wolfram|Alpha. Eigenvalues and the Laplacian of a graph 1. pyplot as plt import scipy. If is a diagonal matrix with the eigenvalues on the diagonal, and is a matrix with the eigenvectors as its columns, then . The algorithm is from the EISPACK collection of subroutines. mathworks. They are associated with an eigenvalue. The eigenvalues of the 2-by-2 block are also eigenvalues …Problem 202. matri-tri-ca@yandex. You da real mvps! $1 per month helps!! :) https://www. Matrix in this example, is defined by: (4) Calculating the eigenvalues. Computes eigenvalues and eigenvectors of general complex matrices. Eigenvalues and eigenvectors have many applications in both pure and applied mathematics. If all the eigenvalues of a symmetric matrix A are distinct, the matrix X, which has as its columns the corresponding eigenvectors, has the property that X 0 X = I , i. They are used in matrix factorization, in quantum mechanics, facial recognition systems, and in many other areas. An eigenvector-eigenvalue pair of a square matrix$A$is a pair of a vector and scalar$(\bb v,\lambda)$for which$A\bb v=\lambda\bb v$. The preceding lemma shows that this is independent of the choice of basis. Definition. Spectral Decomposition of a Matrix Description. We call this subspace the eigenspace of Example Find the eigenvalues and the corresponding eigenspaces for the matrix . Eigenvalues, Eigenvectors, and Diagonalization The concepts of eigenvalues, eigenvectors, and diagonalization are best studied with examples. Proof. Since the zero-vector is a solution, the system is consistent. 2 1 If A is the triangular matrix 0 2 its eigenvalues are 2 and 2. Chapter 6 Eigenvalues and Eigenvectors Example 3 The reflection matrix R = 0 1 1 0 has eigenvalues 1 and −1. Eigenvalues, Eigenvectors, and Diagonal-ization Math 240 Eigenvalues and Eigenvectors Diagonalization Complex eigenvalues Find all of the eigenvalues and eigenvectors of A= 2 6 3 4 : The characteristic polynomial is 2 2 +10. It will allow you to find the trace, determinant, eigenvalues, and eigenvectors of an arbitrary 2x2 matrix. eigsh Eigenvalues and eigenvectors are related to fundamental properties of matrices. eigenvalues of a matrix eigvalsh (a[, UPLO]) Compute the eigenvalues of a complex Hermitian or real symmetric matrix. How many eigenvalues a matrix has will depend on the size of the matrix. This is the method used in the MatLab code shown below. The determinant can tell you about the volume of a little region transformed said matrix, whether the matrix is invertible, and the integral of that determinant can tell you about the volume of a region. INSTRUCTION: Enter the following: (A) This is the 2x2 matrixEigenvalues: The calculator returns the eigenvalues of the 2x2 matrix. The eigenvectors in V are normalized so that the 2-norm of each is 1. Example 1. scipy. The numerical computation of eigenvalues and eigenvectors is a challenging issue, and must be be deferred until later. In this section we will look at solutions to $\vec x' = A\vec x$ where the eigenvalues of the matrix $$A$$ are complex. 7. We must have This is a linear system for which the matrix coefficient is . FINDING EIGENVALUES • To do this, we find the values of λ which satisfy the characteristic equation of the Example solving for the eigenvalues of a 2x2 matrix If you're seeing this message, it means we're having trouble loading external resources on our website. Revision on eigenvalues and eigenvectors The eigenvalues or characteristic root s of an N×N matrix A are the N real or complex number λi such that the equation Ax = λx has non-trivial solutions λ1, λ2, λ3, , λN . Matrix theory says that if for some matrix Band for some The determinant of A is computed by expanding along a row or a column and keep doing it until we reduce the computation to 2 × 2 determinants. In particular, weLearn to find complex eigenvalues and eigenvectors of a matrix. Eigenvalues of Symmetric Tridiagonal Matrices. Thus, for matrices larger than 4⇥4, eigenvalues cannot be computed analytically. Here I show how to calculate the eigenvalues and eigenvectors for the right whale population example from class. (1. Eigenvalues, Singular Value Decomposition Synonyms Eigenvalues = Proper Values, Auto Values, Singular Value Decomposition = Principal Component Analysis Glossary Matrix: a rectangular tableau of numbers Eigenvalues: a set of numbers (real or complex) intrinsic to a given matrix Eigenvectors: a set of vectors associated to a matrix transformation Eigenvalues and Eigenvectors calculation in just one line of your source code. This is a finial exam problem of linear algebra at the Ohio State University. The next matrix R (a reflection and at the same time a permutation) is also special. if TRUE, only the eigenvalues are computed and returned, otherwise both eigenvalues and eigenvectors …Eigenvalues module. eigenvectors and eigenvalues of symmetric matrix A are ordered in the matrices V and D in descending order, so that the rst element of D is the largest eigenvalue of • The matrix A has two eigenvalues: 1 and 3. , n n n 1 eigenvalues of all the 1000 matrices A+E so obtained are depicted by (red) dots in Figure 3. 1 of the matrix A. Powers Matrix Revisited: IfA= PDP 1thenA2 = PDP PDP 1 = PD2P 1. eigvals, returns only the eigenvalues. 5 lectures, §5. The eigenvalues of a matrix are values that allow to reduce the associated endomorphisms. Eigenvalues module. Eigenvalues are generally called lambda $$\lambda$$ and associated with an Eigenvalues and Eigenvectors of a 3 by 3 matrix Just as 2 by 2 matrices can represent transformations of the plane, 3 by 3 matrices can represent transformations of 3D space. Below each eigenvalue λ in the first row is a unit n × 1 eigenvector corresponding to λ. 4 in [EP], §7. linalg as la %matplotlib inline Definition. First we prove that a graph has k connected Eigenvalues and eigenvectors How hard are they to find? I This is a nonlinear problem. (1) Now A−λ 1I = a−λ 1 b c d−λ 1 The matrix A − λ 1I must be singular Matrix 𝐴 is the coefficient matrix, 𝑋 is the variable matrix, and 𝐵 is the constants matrix. characteristic root of a square matrix, eigenvalue of a matrix, eigenvalue of a square matrix. Beware, however, that row-reducing to row-echelon form and obtaining a triangular matrix does not give you the eigenvalues, as row-reduction changes the eigenvalues of the matrix in general. Eigenvalues and Eigenvectors. Now, in order for a non-zero vector v to satisfy this equation, A Imust not be invertible. Matrix nearness problems have received considerable attention in the literature; see A matrix M is diagonalizable if all of its eigenvalues are different; i. In fact, we will in a different page that the structure of the Section PEE Properties of Eigenvalues and Eigenvectors. Given all eigenvalues of a matrix , its trace and determinant can be obtained as The spectrum of an square matrix is the set of its eigenvalues . Simi-larly, the trace of a matrix is the sum of its eigenvalues. INTRODUCTION Let H be an arbitrary (m x m}, possibly complex, matrix. In this equation A is an n-by-n matrix, v is a was found by using the eigenvalues of A, not by multiplying 100 matrices. 2). Two proofs given Eigenvalue definition is - a scalar associated with a given linear transformation of a vector space and having the property that there is some nonzero vector which when multiplied by the scalar is equal to the vector obtained by letting the transformation operate on the vector; especially : a root of the characteristic equation of a matrix. Math 2940: Symmetric matrices have real eigenvalues The Spectral Theorem states that if Ais an n nsymmetric matrix with real entries, then it has northogonal eigenvectors. Its eigen 0 1 Eigenvalues . How to compute Eigenvalues and Eigenvectors. • If A has eigenvalues λ 1,,λ n, the eigenvalues of its exponential eA are eλ 1,,eλ n. We have to compute the characteristic polynomial p( ), nd the roots i,thenforeachiwe have the equation ( A− The hard part is finding the eigenvalues! Find the corresponding eigenvectors is the same as solving $Ax= \lamdax$ for each eigenvalue, $\lamda$. If they are numeric, eigenvalues are sorted in order of decreasing absolute value. Calculates transpose, determinant, trace, rank, inverse, pseudoinverse, eigenvalues and eigenvectors. A is not invertible if and only if is an eigenvalue of A. The eigenvalues of Aare sim- Eigenvectors corresponding to degenerate eigenvalues are chosen to be linearly independent. This matrix calculator computes determinant, inverses, rank, characteristic polynomial, eigenvalues and eigenvectors. Vincent Spruyt. Many problems present themselves in terms of an eigenvalue problem: A·v=λ·v. GG303 1/24/12 4 Eigenvalues and eigenvectors for a real symmetric 2 x 2 matrix Eigenvalues (scalars) If A is a real symmetric 2x2 matrix such that b = c, then A typical 10x10 matrix has 4 smaller positive eigenvalues, 4 smaller negative eigenvalues, and one eigenvalue whose expected value seems to be close to zero. Definition : A scalar, l, is called an eigenvalue of "A" if there is a non-trivial solution, , of . com/the-eigenvalues-of-a-matrixSometimes such a matrix will have eigenvalues which are critically important in applications to linear operators. mit. It decomposes matrix using LU and Cholesky decomposition. governs the interaction between eigenvalues. 1 The Math: Computation of Eigenvalues. Every time we compute eigenvalues and eigenvectors we use this format, which can also be written as det(A - lambda I) =0, where I is the Identity matrix I=((1, 0), (0, 1)). 286 Chapter 6 2/19/2016 · Example solving for the eigenvalues of a 2x2 matrix. Trạng thái: Đã giải quyếtTrả lời: 3Online Matrix Calculator - Bluebit Software. Recipe: a 2 × 2 matrix with a complex eigenvalue is similar to a rotation-scaling matrix. 1 The matrix A = " 1/2 1/3 1/2 2/3 # shows that a Markov matrix can have complex eigenvalues and that Markov matrices can be orthogonal. If v is an eigenvector for AT and if w is an eigenvector for A, and if the corresponding eigenvalues are di erent, then v and w must be orthogonal. All that's left is to find the two eigenvectors. Then we say x is an eigenvector of A with eigenvalue λ if Ax=λx Before going any further, perhaps we should convince you that such things ever happen at all. Theorem Let Abe a square matrix The two matrices must be the same size, i. Example solving …Tác giả: Sal KhanHow to Determine the Eigenvalues of a Matrix - Video https://study. 2 = 2−i; that is, the eigenvalues are not real numbers. Once we've got that down we'll practice finding eigenvalues by going Eigenvalues and Eigenvectors. An eigen-vector associated with λ 1 is a nontrivial solution ~v 1 to (A−λ 1I)~v = ~0. The rst step of the proof is to show that all the roots of the characteristic polynomial of A(i. FINDING EIGENVALUES AND EIGENVECTORS EXAMPLE 1: Find the eigenvalues and eigenvectors of the matrix A = 1 −3 3 3 −5 3 6 −6 4 . Intheexample(5)above,thetraceis0= matrix entries; the sensitivity of the eigenvalues is independent of the diagonal entry and of the arguments of off diagonal entries. NET Matrix Library Enter your matrix in the text area below: Eigenvalues, Eigenvectors and Their Uses 1 Introduction 2 De ning Eigenvalues and Eigenvectors 3 Key Properties of Eigenvalues and Eigenvectors 4 Applications of Eigenvalues and Eigenvectors 5 Symmetric Powers of a Symmetric Matrix 6 Some Eigenvalue-Eigenvector Calculations in R James H. The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. Starting with the eigenvalue equation (1), subtract λxfrom both sides to obtain Ax−λx =0 (A−λI)x =0. The eigenvalues of a hermitian matrix are real, since (λ − λ)v = (A * − A)v = (A − A)v = 0 for a non-zero eigenvector v. I Must use an iterative It's often worth checking that the sum of the eigenvalues is equal to the trace of A. If . The matrix Dis diagonal and the matrix Ehas entries that are su ciently small that the diagonal entries of Dare reasonable approximations to the eigenvalues of A. If analyzing matrices gives you a headache, this eigenvalue calculator 2x2 is a perfect tool for you. Let be a linear transformation, where V is a finite-dimensional vector space. In matrix theory, the trace of a matrix is the sum of its eigenvalues and the determinant is the product of its eigenvalues. Understand the geometry of 2 × 2 and 3 × 3 matrices with a complex eigenvalue. Let$A$be a square matrix. In this video I outline the general procedure for finding eigenvalues and eigenvectors for an n x n matrix and work an example using a Tác giả: patrickJMTLượt xem: 1. gr/matrix-calculatorMatrix Calculator. Linear www. Performs LU, Cholesky, QR, Singular value decomposition. Syntax matrix eigenvalues r c = A where A is an n nnonsymmetric, real matrix. In the following sections we will determine the eigenvectors and eigenvalues of a matrix , by solving equation . With help of this calculator you can: find the matrix determinant, the rank, raise the matrix to a power, find the sum and the multiplication of matrices, calculate the inverse matrix. Another important use of eigenvalues and eigenvectors is diagonalisation, and it is to this that we now turn. De matrix [−] beschrijft als reële matrix een draaiing van het reële vlak over een hoek van 90 graden, en heeft bij …Lecture 33: Markovmatrices A n × n matrix is called a Markov matrixif all entries are nonnegative and the sum of each column vector is equal to 1. Let$ \lambda_1 \le \lambda_2 \le \lambda_3 \le \lambda_4 \$ be the eigenvalues of this matrix. 1MEigenvalues and eigenvectors - MATLAB eighttps://www. 1 Overview of diagonalizations We have seen that a transformation matrix looks completely di erent in di erent bases (the matrices (5. λ= 0: We want x= (x 1,x 2) such that 2 6 1 3 −0 1 0 0 1 x 1 x 2 = 0 0 The coefficient matrix of this We prove that eigenvalues of a Hermitian matrix are real numbers. eigenvalues & eigenvectors Definition : An eigenvector of an n x n matrix, "A", is a nonzero vector, , such that for some scalar, l. Ask Question 12. In this case: 1 + (3 + √15) + (3 - √15) = 1 + 4 + 2 b) Since all of the eigenvalues are distinct, the eigenvector corresponding to the eigenvalue λ can be easily obtained using the equation: Av = λv Example: Using the eigenvalue λ = 1, we find: Av = v An matrix A is diagonalizable if it has n independent eigenvectors. of size $$m \times m$$, the eigenvalues of $$M$$ are the roots of the characteristic polynomial $$P$$ of the matrix $$M$$. 1 Let A be an n × n matrix. There are very short, 1 or 2 line, When I test the method for a matrix with known eigenvalues, m needs to be large enough to get good approximations of the eigenvalues of the A set of block eigenvalues of a matrix is a complete set if the set of all the eigenvalues of these block eigenvalues is the set of eigenvalues of the matrix [1]. eigvals (a) Compute the eigenvalues of a general matrix. Using MatLab to find eigenvalues, eigenvectors, and unknown coefficients of initial value problem. These straight lines may be the optimum axes for describing rotation of a what range the eigenvalues of a certain matrix would be in we can use Gershgorin’s Theorem. corresponding to and k is any scalar, then . Examples of eigenvalue in a Sentence Recent Examples on the Web By contrast, the Tracy-Widom curve appears to arise from variables that are strongly correlated, such as interacting species, stock prices and matrix eigenvalues . A matrix M is diagonalizable if all of its eigenvalues are different; i. 2. The eigenvalues of A are just the diagonal entries of D. ask. So information about the eigenvalues could be used to obtain information on M(n). The determinant can tell you about the volume of a little region transformed said matrix, whether the matrix is invertible, and the integral of that determinant can tell you about the volume of …Find the eigenvalues for a square matrix A. and the two eigenvalues are . Assembling the eigenvectors column-wise into a matrix X, the eigenvector equations can be written AX=X where is a diagonal matrix with eigenvalues on the diagonal. If A is real, there is an orthonormal basis for R n consisting of eigenvectors of A if and only if A is symmetric. The columns of V present eigenvectors of A. In that case A−1 = 1 j are simple complex conjugate eigenvalues of A, Example 1: Determine the eigenvectors of the matrix. The matrix is almost always invertible, in which case we have . 0. We formally define an eigenvalue of a matrix Jun 22, 2018 Together we'll learn how to find the eigenvalues of any square matrix. 3c] 1 linear algebra characteristic polynomial, matrix rank, Matrix similarityComputation of Eigenvectors. In finding the eigenspace, this comes down to finding the solution space of the matrix that I referred to in the above paragraph. Example solving for the eigenvalues of a 2x2 matrix. (This is true, for example, if A has n distinct eigenvalues. Computation of det(A - lambda vec(I)) =0 leads to the Characteristic Polynomial, where the roots of this polynomial are the eigenvalues of the matrix A. We define the characteristic polynomial and show how it can be used to find the eigenvalues for a matrix. Get the free "Eigenvalues Calculator 3x3" widget for your website, blog, Wordpress, Blogger, or iGoogle. It is possible for a real or complex matrix to have all real eigenvalues without being hermitian. Wigner proposed to study the statistics of eigenvalues of large random matrices as a model for the energy levels of heavy nuclei. Eigenvectors and eigenvalues However, if the covariance matrix is not diagonal, such that the covariances are not zero, then the situation is a little more complicated. For computing eigenvalues and eigenvectors of matrices over floating point real or complex numbers, the matrix should be defined over RDF (Real Double Field) or CDF (Complex Double Field), respectively. 1) The first row of the output consists of the real eigenvalues of the square matrix A corresponding to the data in R1. Define Eigenvalues. i. 1 Eigenvalues and Eigenvectors Eigenvalue problem: If A is an nvn matrix, do there exist nonzero vectors x in Rn such that Ax is a scalar multiple of x The eigenvalues of a matrix are the roots of its characteristic equation. The orthogonal matrix R0= HG 0 G n 1 has columns that are reasonable approximations to the eigenvectors of A. It is Determining the eigenvalues of a 3x3 matrix. Now, if are both eigenvectors of A corresponding to , then . Those because we only need the determinant of a 2 by 2 matrix. Jonathan WarrenSince every linear operator is given by left multiplication by some square matrix, finding the eigenvalues and eigenvectors of a linear operator is equivalent to finding the eigenvalues and eigenvectors of the associated square matrix; this is the terminology that will be followed. Permutations have all j jD1. Background for QR Method Suppose that A is a real symmetric matrix
|
2019-07-18 23:55:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055850505828857, "perplexity": 366.89133522878967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525863.49/warc/CC-MAIN-20190718231656-20190719013656-00214.warc.gz"}
|
http://isotopicmaps.org/pipermail/tmql-wg/2006-November/000326.html
|
# [tmql-wg] New TMQL proposal
Robert Barta rho at bigpond.net.au
Mon Nov 13 22:46:59 EST 2006
On Mon, Nov 06, 2006 at 12:05:21PM +0100, Pawe? G?siorowski wrote:
> Hi Robert,
>
> Thank you for looking at my work.
> I tried to answer your comments and put them below.
> Very important feature of DTMQL is that it allows to query TMs, RDFs
> and DB tables in a single query. Other thing is that its framework is a RDB.
> I am not sure if it sounds clear so if you have questions please
[ I am cc'ing the tmql mailing list. ]
For the others, this is about the TMQL use case solutions solved
with DTMQL:
> I have put the UC solutions at topicmaps.axent.pl
> From: Robert Barta [mailto:rho at bigpond.net.au]
> > Your use case solutions also underline why we tried to move away from
> > having predicates for _everything_. So a
> >
> > abstract ($D,$A)
> >
> > becomes in TMQL
> >
> > $D / abstract > > > > especially for occurrences this is more easier to the eyes. For 19 > > TMQL would allow to write > > > > // tutorial ++ // paper > > > > or SQL-ish > > > > select$d
> > where $d isa tutorial |$d isa paper
> >
> > or ...
> >
> > Nr 23 would only become // * / email
>
> In DTMQL one can also query items using 'isa' predicate:
>
> SELECT "$P" > FROM dtmql { instanceOf($P,person) } a;
That was not the point I was trying to make.
The way I see it, is that (a) a TM has a lot of graph-like connections
between the predefined items (topic, assocs, ...) and that these
connections run along certain axes: from assoc to players (or back)
from topic to types (and back), and so on.
If you offer a query language, then there are two choices to look at
these axes:
- unbiased: use variables (bound and unbound) and try to match
This is what tolog does and DTMQL.
- biased: a 'current position' is assumed and a particular axis
is followed from there.
This is what TMQL _also_ does.
Semantically, both approaches are equivalent, but with longer paths I
find
axis1 ($a,$b) and
axis2 ($b,$c) and
axis3 ($c,$d)
not so readable as
$a axis1 axis2 axis3 especially when$b and $c are never needed. Which is quite often the > > The other problem I have with 'predicate only' is that such an > > approach has NOTHING to do with TMs. You could switch to, say, UML > > metamodelling and use exactly the same language structure. > I tried to make DTMQL not strictly dedicated so that I can perform queries > on TMs and RDFs in a single query. It may be treated as an extension to SQL > which allows to query TMs in a much simpler way compared to SQL only. > This was important for me as I am designing a CMS software using TMs, RDF > and RDB tables. The structure of DTMQL allows to query such structures but > only if they are stored in RDB. That choice is perfectly ok for a product. For a _standard_ I would argue that it is too limiting. Consider (my favourite example :-) a query over the DNS: http://www.idealliance.org/papers/dx_xmle04/slides/barta/foil14.html > > I am also unsure how you treat the situation where one has to > > distinguish between the 'occurrence item' and only the value > > itself. This has caused some headache here. > > This one I am still thinking over. Current concept is : > > SELECT "$A" FROM dtmql {...} a; returns :
> - object identifier if $A is an object > - string if$A is a value
>
> To extract a value from occrrence object one may :
> - return object and use a proper function from 'tmo_methods' schema
> - or return a value by using proper built-in predicates in the DTMQL block.
Yes, but this is all manual work. Which the TMQL processor can do,
and can do much faster than actually calling an (in general) arbitrary
function. In the TMQL expression
select $p / homepage from ... where$p isa person
the processor will automatically 'atomify' the homepage, because it
assumes that this is what most people actually want. If that is not
the case then the shortcut "/ homepage" has to be replaced with
$p characteristics :>: homepage [ or whatever syntax we end up with ]. > > Also see how TMQL resolves 11 without having to resort to subqueries: > > > select$a
> > where
> > is-author-of ($a,$d)
> > & not is-author-of ($a',$d)
>
> I think this one shows a difference between treating the negation.
> In the above query there is a problem with the "$a'" variable. > For example the expression "not is-author-of($a',$d)" is true for$a' :
>
> - topic[@id = puccini]
> - occurrence
> - "Bla Bla Bla"
> etc.
If for 'puccini' there is no 'is-author-of' association, then, yes, the
predicate "not is-author-of ($a',$d)" is true.
> So probably it should return infinite set of items.
Every map is finite. We have a closed world here.
> Or should it return items that make the predicate true but only the ones
> that exist in the TM.
Yup, that is our current position. Which is non-Semantic-Web-ish, I guess.
> To solve that DTMQL only allows negation if all variables in the negated
> predicate appear in positive predicates in the same alternative. So I can
> rewrite the query #11 as follows:
>
> SELECT "$A1" > FROM > dtmql { > is-author-of($A2 : author, $SOME_DOC : opus), > is-author-of($A1 : author, $D : opus), > not(is-author-of($A2 : author, $DOC : opus)), >$A1 <> $A2 > } > I am not sure if it's clear. If you have any questions please ask. Do you mean to have DIFFERENT variables$SOME_DOC, $D,$DOC? But then
they can bind to different values, right? Which is not what this query
is about (2 authors of the SAME doc).
If only \$DOC were be used, then this is _EXACTLY_ what the above TMQL
query does.
> > Case 18 has to use temporary tables. This can be a quite expensive
> > operation if only a small fraction of the TM actually contains the
> > relevant information. This is almost impossible for a SQL processor
> > to optimize because the query author has preempted optimization.
>
> This one is sill a problem as SQL allows only first order logic.
> The solution for UC#18 is temporary and I am still trying to find a way to
> query transitive relationships. For sure they are 'killing' the DB and the
> task is to find the most optimal solution.
For TMQL I would argue that how this is implemented is
"implementation" :-) Maybe it is hard to do it "on the fly" with
RDBMSs, maybe it is simple if ontological information about the
queried map is available.
TMQL does not try to mess with that.
> > Good to see your solutions as they show nicely where (and why) TMQL
> > has chosen a different path. It is also clear that DTMQL is much
> > closer to SQL and may be easier to implement if a TM is already there.
> > But again, a TM has to be in the RDB already _in a specific way_ to
> > make this work.
>
> Yes, that is true DTMQL is designed for querying TMs stored in RDBs.
Here we have it again :-))
\rho
> > -----Original Message-----
> > From: Robert Barta [mailto:rho at bigpond.net.au]
> > Sent: Saturday, October 21, 2006 6:05 AM
> > To: Pawe? G?siorowski
> > Cc: tmql-wg at isotopicmaps.org
> > Subject: Re: [tmql-wg] New TMQL proposal
> >
> > On Tue, Oct 10, 2006 at 08:31:38PM +0200, Pawe? G?siorowski wrote:
> > > I am preparing a new TMQL proposal for quering topic maps stored in
> > > relational database.
> >
> > Hi Pawel,
> >
> > Maybe it is a bit late in the game to suggest 'a new' TMQL, but any
> > input is appreciated. Please note, that TMQL (as standard) __CAN NOT__
> > assume that someone is storing his topic maps in a relational
> > database. Conceptually, topic maps can be wrapped around __ANY__ data
> > resource.
> >
> > I also assume that you are aware of
> >
> > http://topicmaps.it.bond.edu.au/docs/37/toc
> >
> > > Yet I have only documented a list of use case solutions, I would be
> > > if you could take a look at it and give me some comments.
> >
> > If you tell me/us, where this can be found...?
> >
> > \rho
> >
> > PS: This mailing list is not overly active. I only stumbled over this post
> > now.
> >
>
|
2018-11-14 11:52:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6341187357902527, "perplexity": 6788.4843044948075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741979.10/warc/CC-MAIN-20181114104603-20181114130603-00125.warc.gz"}
|
http://forums.techguy.org/web-email/416658-help-internet-stops-working-after.html
|
Search Search for: Web & EmailAll Forums
# Help - internet stops working after a few minutes
tsinvest
Junior Member with 4 posts.
Join Date: Nov 2005
14-Nov-2005, 08:36 AM #1
Internet stops working after 5 min. Windows ME
I have a Windows ME machine on which the internet connection stops working after about 5 minutes. I must reboot the machine to get another 5 minutes and then it stops again. I had AVG virus program and Zone Alarm when it happenned, then bought Norton's Internet Security program suite 2005 hoping that would fix it, no luck. Anyway I have Norton's installed, the AVG and Zone Alarm uninstalled, I ran Spybot and Adware - still no luck. I also ran Cleanup - took over 1000 files off , but the internet still cuts out. This happens with IE or Firefox.
Below is the file I get from HijackThis - I am hoping someone can help me. I will be off to work now, but I will check back later this afternoon for a hopeful reply. Thanks, Tom
ps - by the way I am sending this from my other machine which is XP and works fine. The ME machine is connected by wireless router, which I haven't had any problems with over the past year
Logfile of HijackThis v1.99.1
Scan saved at 8:14:30 AM, on 11/14/2005
Platform: Windows ME (Win9x 4.90.3000)
MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106)
Running processes:
C:\WINDOWS\SYSTEM\KERNEL32.DLL
C:\WINDOWS\SYSTEM\MSGSRV32.EXE
C:\WINDOWS\SYSTEM\MPREXE.EXE
C:\WINDOWS\SYSTEM\ATI2EVXX.EXE
C:\WINDOWS\SYSTEM\KB891711\KB891711.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\CCEVTMGR.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\CCSETMGR.EXE
C:\PROGRAM FILES\NORTON INTERNET SECURITY\ISSVC.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\CCPROXY.EXE
C:\WINDOWS\EXPLORER.EXE
C:\WINDOWS\SYSTEM\RESTORE\STMGR.EXE
C:\WINDOWS\SYSTEM\SYSTRAY.EXE
C:\WINDOWS\SYSTEM\HIDSERV.EXE
C:\WINDOWS\SYSTEM\WMIEXE.EXE
C:\COMPAQ\CPQINET\CPQINET.EXE
C:\PROGRAM FILES\COMPAQ\EASY ACCESS BUTTON SUPPORT\BTTNSERV.EXE
C:\PROGRAM FILES\COMPAQ\DIGITAL DASHBOARD\DEVGULP.EXE
C:\CPQS\BWTOOLS\SCCENTER.EXE
C:\WINDOWS\PCTVOICE.EXE
C:\PROGRAM FILES\COMPAQ\EASY ACCESS BUTTON SUPPORT\EAUSBKBD.EXE
C:\PROGRAM FILES\ATI TECHNOLOGIES\ATI CONTROL PANEL\ATIPTAXX.EXE
C:\WINDOWS\SYSTEM\BCMWLTRY.EXE
C:\WINDOWS\SYSTEM\LEXBCES.EXE
C:\WINDOWS\SYSTEM\PRINTRAY.EXE
C:\WINDOWS\SYSTEM\RPCSS.EXE
C:\WINDOWS\SYSTEM\SPOOL32.EXE
C:\PROGRAM FILES\MUSICMATCH\MUSICMATCH JUKEBOX\MM_TRAY.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\CCAPP.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\CCPD-LC\SYMLCSVC.EXE
C:\WINDOWS\SYSTEM\DDHELP.EXE
C:\PROGRAM FILES\CREATIVE\MEDIASOURCE\DETECTOR\CTDETECT.EXE
C:\PROGRAM FILES\COMMON FILES\MICROSOFT SHARED\WORKS SHARED\WKCALREM.EXE
C:\PROGRAM FILES\COMMON FILES\SYMANTEC SHARED\SNDSRVC.EXE
C:\WINDOWS\SYSTEM\WBEM\WINMGMT.EXE
C:\WINDOWS\DESKTOP\HIJACKTHIS.EXE
R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://desktop.presario.net/scripts/...nsumer&LC=0409
R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Bar = http://search.presario.net/scripts/r...rchbar&LC=0409
R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://search.presario.net/scripts/r...search&LC=0409
R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Bar = http://search.presario.net/scripts/r...rchbar&LC=0409
R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://search.presario.net/scripts/r...search&LC=0409
R1 - HKCU\Software\Microsoft\Internet Explorer\Search,SearchAssistant = http://search.presario.net/scripts/r...rchbar&LC=0409
R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Window Title = Microsoft Internet Explorer
O2 - BHO: (no name) - {53707962-6F74-2D53-2644-206D7942484F} - C:\Program Files\Spybot - Search & Destroy\SDHelper.dll
O2 - BHO: Norton Internet Security - {9ECB9560-04F9-4bbc-943D-298DDF1699E1} - C:\Program Files\Common Files\Symantec Shared\AdBlocking\NISShExt.dll
O2 - BHO: NAV Helper - {BDF3E430-B101-42AD-A544-FADC6B084872} - C:\Program Files\Norton Internet Security\Norton AntiVirus\NavShExt.dll
O3 - Toolbar: &Radio - {8E718888-423F-11D2-876E-00A0C9082467} - C:\WINDOWS\SYSTEM\MSDXM.OCX
O3 - Toolbar: Norton Internet Security - {0B53EAC3-8D69-4b9e-9B19-A37C9A5676A7} - C:\Program Files\Common Files\Symantec Shared\AdBlocking\NISShExt.dll
O3 - Toolbar: Norton AntiVirus - {42CDD1BF-3FFB-4238-8AD1-7859DF00B1D6} - C:\Program Files\Norton Internet Security\Norton AntiVirus\NavShExt.dll
O4 - HKLM\..\Run: [ScanRegistry] C:\WINDOWS\scanregw.exe /autorun
O4 - HKLM\..\Run: [PCHealth] C:\WINDOWS\PCHealth\Support\PCHSchd.exe -s
O4 - HKLM\..\Run: [SystemTray] SysTray.Exe
O4 - HKLM\..\Run: [Hidserv] Hidserv.exe run
O4 - HKLM\..\Run: [CPQEASYACC] C:\Program Files\Compaq\Easy Access Button Support\cpqeadm.exe
O4 - HKLM\..\Run: [EACLEAN] C:\Program Files\Compaq\Easy Access Button Support\eaclean.exe
O4 - HKLM\..\Run: [CPQInet] c:\compaq\CPQInet\CpqInet.exe
O4 - HKLM\..\Run: [Digital Dashboard] C:\Program Files\Compaq\Digital Dashboard\DevGulp.exe
O4 - HKLM\..\Run: [Service Connection] c:\cpqs\bwtools\sccenter.exe
O4 - HKLM\..\Run: [CountrySelection] pctptt.exe
O4 - HKLM\..\Run: [PCTVOICE] pctvoice.exe
O4 - HKLM\..\Run: [ATIPTA] C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe
O4 - HKLM\..\Run: [Belkin WLAN] C:\WINDOWS\SYSTEM\bcmwltry.exe
O4 - HKLM\..\Run: [LexStart] Lexstart.exe
O4 - HKLM\..\Run: [LexmarkPrinTray] PrinTray.exe
O4 - HKLM\..\Run: [MMTray] "C:\Program Files\Musicmatch\Musicmatch Jukebox\mm_tray.exe"
O4 - HKLM\..\Run: [ccApp] "C:\Program Files\Common Files\Symantec Shared\ccApp.exe"
O4 - HKLM\..\Run: [Symantec Core LC] "C:\Program Files\Common Files\Symantec Shared\CCPD-LC\symlcsvc.exe" start
O4 - HKLM\..\Run: [Symantec NetDriver Monitor] C:\PROGRA~1\SYMNET~1\SNDMON.EXE /Consumer
O4 - HKLM\..\Run: [TrojanScanner] C:\Program Files\Trojan Remover\Trjscan.exe
O4 - HKLM\..\RunServices: [*StateMgr] C:\WINDOWS\System\Restore\StateMgr.exe
O4 - HKLM\..\RunServices: [ATIPOLL] ati2evxx.exe
O4 - HKLM\..\RunServices: [ATISmart] C:\WINDOWS\SYSTEM\ati2s9ag.exe
O4 - HKLM\..\RunServices: [KB891711] C:\WINDOWS\SYSTEM\KB891711\KB891711.EXE
O4 - HKLM\..\RunServices: [ccEvtMgr] "C:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe"
O4 - HKLM\..\RunServices: [ccSetMgr] "C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe"
O4 - HKLM\..\RunServices: [ISSVC] "C:\Program Files\Norton Internet Security\ISSVC.exe"
O4 - HKLM\..\RunServices: [ccProxy] C:\Program Files\Common Files\Symantec Shared\ccProxy.exe
O4 - HKLM\..\RunServices: [ScriptBlocking] "C:\Program Files\Common Files\Symantec Shared\Script Blocking\SBServ.exe" -reg
O4 - HKCU\..\Run: [MoneyAgent] "C:\Program Files\Microsoft Money\System\Money Express.exe"
O4 - HKCU\..\Run: [Creative Detector] C:\Program Files\Creative\MediaSource\Detector\CTDetect.exe /R
O4 - Startup: Microsoft Works Calendar Reminders.lnk = C:\Program Files\Common Files\Microsoft Shared\Works Shared\wkcalrem.exe
O4 - Startup: Microsoft Office.lnk = C:\Program Files\Microsoft Office\Office\OSA9.EXE
O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\PROGRA~1\MESSEN~1\MSMSGS.EXE
O9 - Extra 'Tools' menuitem: MSN Messenger Service - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\PROGRA~1\MESSEN~1\MSMSGS.EXE
O9 - Extra button: Translate - {06FE5D05-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O9 - Extra 'Tools' menuitem: AV &Translate - {06FE5D05-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O9 - Extra button: (no name) - {06FE5D02-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O9 - Extra 'Tools' menuitem: &Find Pages Linking to this URL - {06FE5D02-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O9 - Extra button: (no name) - {06FE5D03-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O9 - Extra 'Tools' menuitem: Find Other Pages on this &Host - {06FE5D03-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O9 - Extra button: (no name) - {06FE5D04-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O9 - Extra 'Tools' menuitem: AV Live - {06FE5D04-8F11-11d2-804F-00105A133818} - http://search.presario.net/scripts/r...c=3c00&LC=0409 (file missing)
O12 - Plugin for .spop: C:\PROGRA~1\INTERN~1\Plugins\NPDocBox.dll
O16 - DPF: {B49C4597-8721-4789-9250-315DFBD9F525} (IWinAmpActiveX Class) - http://cdn.digitalcity.com/radio/amp...1.11_en_dl.cab
Last edited by tsinvest; 14-Nov-2005 at 05:22 PM..
Join Date: Jan 2005
Location: London England
14-Nov-2005, 06:21 PM #2
Hi and welcome to TSG.
tsinvest
Junior Member with 4 posts.
Join Date: Nov 2005
14-Nov-2005, 06:49 PM #3
How do I get a log expert?
Do I wait for someone to read my message or do I have to do something to get a log expert? Thank you
Topazz
Member with 561 posts.
Join Date: Sep 2000
Location: New Zealand
Experience: Compulsive fiddler
14-Nov-2005, 07:09 PM #4
Yep, just sit tight and wait for now.
If you haven't received a reply by this time tomorrow post another reply with the word "Bump" in it to bring it back up the page.
tsinvest
Junior Member with 4 posts.
Join Date: Nov 2005
15-Nov-2005, 08:21 AM #5
I noticed that the problem that stops my internet connection doesn't necessarily occur only when I am on the internet. What I mean is that if I boot up the computer and just use some programs and let it run for a while, after about 10 minutes I cannot connect to the internet. So whatever is causing this occurs about 5-10minutes after the computer is booted or rebooted, whether or not I am on the internet. (I can get on the internet when I first boot up for about 5-10 min.)
Another observation is that when I am on the internet after booting up after about 5 min. the hour glass indicator will appear (as if something is going on in the background, even though I didn't click on anything)
I hope this helps someone to help me.
Thank you, Tom S.
Another Dave
Junior Member with 2 posts.
Join Date: Nov 2005
Experience: Intermediate
15-Nov-2005, 01:40 PM #6
Quote:
Originally Posted by tsinvest I noticed that the problem that stops my internet connection doesn't necessarily occur only when I am on the internet. What I mean is that if I boot up the computer and just use some programs and let it run for a while, after about 10 minutes I cannot connect to the internet. So whatever is causing this occurs about 5-10minutes after the computer is booted or rebooted, whether or not I am on the internet. (I can get on the internet when I first boot up for about 5-10 min.) Another observation is that when I am on the internet after booting up after about 5 min. the hour glass indicator will appear (as if something is going on in the background, even though I didn't click on anything) I hope this helps someone to help me. Thank you, Tom S.
I suffered from this problem (almost) exactly recently and finally fixed it last night. In my case, I continue to have access to https, but not http after 30 seconds to 10 minutes of a reboot.
Short story: I ran regcleaner and deleted all stale registry entries. Then i reinstalled zonealarm and everything works.
Longer story: My computer was crippled 2 weeks ago. Could not even get into safe mode. So I ran windows 98 setup to reinstall the OS on top of the old one (i.e. I did NOT format the hard disk). Somewhere along the way zonealarm was uninstalled. I forget why i did this and when.
I then installed the latest IE and all the critical updates. Had a few adentures (one of the updates would not install from the web, so I downloaded it and manually ran it and that worked). I also kept getting webcheck errors, so I removed that. Not sure if any of that is relevent though.
This left me with IE working for 30 seconds to 10 minutes, then http site would hang. I could get https (also MSN worked, and outlook express).
After 2 weeks of trying everything, a friend suggested ZoneAlarm was probably still partially installed and running a sentinel like application every 10 minutes. He figured I should either clean the registry out and/or reinstall ZoneAlarm and see what happens. I tried it and everything has been fine since.
Follow-up: I bought my mom a router a few months ago, and she kept complaining that her access to the internet would stop after a hour or two from reboot. It also turned out to be a ZoneAlarm setting, Sorry I forget which - but something to do with allowing external computers to ask you for your IP address.
Morale: First thing to check with internet connections problems suddenly appearing is ZoneAlarm (oh, and uninstalling ZoneAlarm doesn;t realy uninstall it, or so it appears).
tsinvest
Junior Member with 4 posts.
Join Date: Nov 2005
15-Nov-2005, 05:57 PM #7
Dave - You did it! Thank you so much. I reinstalled ZoneAlarm and then after rebooting I tried going on the internet and no problem. Next I uninstalled ZoneAlarm the proper way through their uninstall selection in the program list. All works fine now.
Many thanks, Tom S.
Join Date: Jan 2005
Location: London England
15-Nov-2005, 06:11 PM #8
Hopefully one of the guys and girls who are experts in reading logs will pick up on this thread..
Topazz
Member with 561 posts.
Join Date: Sep 2000
Location: New Zealand
Experience: Compulsive fiddler
15-Nov-2005, 06:18 PM #9
Yes, it is good news indeed. Thanks Another Dave for sharing your experience and helping to solve the problem.
Like blues_harp28 says though, there are a few entries in the HJT log that needs looking at so keep sitting tight. The log experts are pretty busy people so if your thread still hasn't been looked at by this time tomorrow bump it back up the page again by posting "Bump" in a reply.
Member with 49,014 posts.
Join Date: Sep 2004
15-Nov-2005, 06:25 PM #10
You can fix the O9 file missing entries
I am not sure about Trojan Remover
Nothing nasty in there
fightinchunk
Junior Member with 3 posts.
Join Date: Nov 2005
30-Nov-2005, 02:29 PM #11
what if your having the same problem, but don't have zonealarm but a router with a built in firewall. I have two computers hooked up to a linksys router that never gave me problems at home, but once i hook up to the school network, i get problems all the time on my desktop. my laptop works awesome, but my comp takes a long time to connect to websites, and it stops working after a while. Im comtemplating just reformatting the entire comp unless someone can help me. i'd appreaciate any help.thanks
Topazz
Member with 561 posts.
Join Date: Sep 2000
Location: New Zealand
Experience: Compulsive fiddler
30-Nov-2005, 03:13 PM #12
frustratedsailor
Junior Member with 1 posts.
Join Date: May 2008
31-May-2008, 04:14 PM #13
Fixed at last ... (I hope)
I had the same problem and after following many of these reports, I noticed that the firewall was often mentioned. I am running McAfee virus protection , and found that by pressing the 'Common Tasks/Restore Firewall Defaults' link, the problem went away.
I am posting this hoping that I will save someone else many hours of frustration.
techguy.org/416658
As Seen On
WELCOME TO TECH SUPPORT GUY!
If you're not already familiar with forums, watch our Welcome Guide to get started.
Are you having the same problem? We have volunteers ready to answer your question, but first you'll have to join for free. Need help getting started? Check out our Welcome Guide.
Search Tech Support Guy
### Find the solution to your computer problem!
Currently Active Users Viewing This Thread: 1 (0 members and 1 guests)
|
2014-12-22 20:35:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.818651020526886, "perplexity": 13948.197274177073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802776563.13/warc/CC-MAIN-20141217075256-00039-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/at-what-angle-of-incidence-is-amplitude-coefficient-of-reflection-parallel-to-the-inc.291039/
|
# Homework Help: At what angle of incidence is amplitude coefficient of reflection parallel to the inc
1. Feb 9, 2009
### alchemist7
At what angle of incidence is amplitude coefficient of reflection parallel to the incidence plane 0?
my solution is that by Fresnell's equation, r//=tan(i-i)/tan(i+i)=0, i=i`,i=90.so for whatever surface,angle of incidence should be 90.
is it correct? different from the answer!
2. Feb 9, 2009
|
2018-07-21 22:12:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42268380522727966, "perplexity": 3846.167787704334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592778.82/warc/CC-MAIN-20180721203722-20180721223718-00032.warc.gz"}
|
https://brilliant.org/problems/lets-fun/
|
$\large \color{#20A900} {\text{True}}$ $\large or$ $\large \color{#D61F06}{False}$ $\large ?$
All positive prime numbers can be written as the sum of two or more perfect square numbers.
• It's not necessary that the perfect square numbers have to be distinct.
• Example: $5=2^2+1^2$
This is a Number is all around series problem.
Try out all of my problem.
×
|
2021-05-07 12:24:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5443892478942871, "perplexity": 361.6276488990514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00476.warc.gz"}
|
https://www.vedantu.com/question-answer/a-sum-of-rs-700-is-to-be-used-to-give-seven-cash-class-10-maths-cbse-5eeb43ba8a54c70190c61eba
|
Courses
Courses for Kids
Free study material
Free LIVE classes
More
# A sum of Rs. 700 is to be used to give seven cash prizes to students of a school for their overall academic performance. If each cash prize is Rs. 20 less than its preceding prize, find the value of each prize.
Last updated date: 18th Mar 2023
Total views: 303.9k
Views today: 2.82k
Verified
303.9k+ views
Hint: We can assume the amount of first prize to be some positive value x. Now we can write the amount of second prize to be 20 less than x, third amount to be 20 less than the second prize and so on. Sum of all these will be equal to Rs. 700.
Let us assume the amount of first prize to be Rs. x. Now we can write the amount of other prizes to be following-
Second prize= $x-20$
Third prize=(Second prize)-20
$\begin{array}{*{35}{l}} =\left( x-20 \right)-20 \\ ~=x-2\cdot 20 \\ \end{array}$
Fourth prize=(Third prize)-20
\begin{align} & =(x-2\cdot 20)-20 \\ & =x-3\cdot 20 \\ \end{align}
Likewise, we can write the fifth, sixth and seventh prize to be $(x-4\cdot 20)$ , $(x-5\cdot 20)$ and $(x-6\cdot 20)$ .
Now the sum of all of these prizes is Rs. 700. We can write this statement as an equation which is:
$x+(x-20)+(x-2\cdot 20)+(x-3\cdot 20)+(x-4\cdot 20)+(x-5\cdot 20)+(x-6\cdot 20)=700$
On simplifying we get,
\begin{align} & 7\cdot x-(1+2+3+4+5+6)\cdot 20=700 \\ & \Rightarrow 7x-21\times 20=700 \\ \end{align}
Dividing both sides with 7 we have
\begin{align} & x-3\times 20=100 \\ & \Rightarrow x-60=100 \\ & \Rightarrow x=160 \\ \end{align}
Therefore, the first prize is Rs. 160
Now we can calculate the other prizes as follows-
First prize= $Rs.160$
Second prize= $160-20=Rs.140$
Third prize= $140-20=Rs.120$
Fourth prize= $120-20=Rs.100$
Fifth prize= $100-20=Rs.80$
Sixth prize= $80-20=Rs.60$
Seventh prize= $60-20=Rs.40$
Note: For summation of $1+2+3+4+5+6$ instead of directly adding we could have used the formula $\sum\limits_{r=1}^{n}{r}=\dfrac{n(n+1)}{2}$ . Using this formula the summation $1+2+3+4+5+6$ can be written as $\sum\limits_{r=1}^{6}{r}=\dfrac{6\times 7}{2}=21$ which we can check that it is correct.
|
2023-03-24 12:13:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999974250793457, "perplexity": 1796.1991662138241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00189.warc.gz"}
|
https://www.physicsforums.com/threads/finding-projectile-motion-range-at-an-angle.738767/
|
# Finding Projectile Motion range at an angle?
1. Feb 16, 2014
### Kavorka
The original problem has very confusing wording without the picture so I reworded it for simplicity:
A toy cannon is placed on a ramp on a hill, pointing up the hill. With respect to the x-axis the hill has a slope of angle A and the ramp has a slope of angle B. If the cannonball has a muzzle speed of v, show that the range R of the cannonball (as measured up the hill, not along the x-axis) is given by:
R = [2v^2 (cos^2 (B))(tan(B) - tan(A))] / [g cos(A)]
The base equation we've derived for projectile range on a flat surface:
R = (v^2 /g)sin(2θ)
from the parabolic equation:
y = vt + (1/2)at^2 (where v is initial velocity, and v and a are in the y-direction)
and setting y to 0.
I'm not completely sure how to correctly start this problem or how to properly take the angle of the slope of the hill into account, the trig is a bit overwhelming. Even a good shove in the right direction would help immensely!
2. Feb 17, 2014
### voko
You are looking for an intersection of a straight line with a parabola. Do you know howto find it?
3. Feb 17, 2014
### Kavorka
Yes, but were not analyzing it graphically we're analyzing it with motion equations and trig. Since I posted I was able to find an equation for time by finding the x and y-values of where the cannonball lands in terms of motion equations and relating them with the tangent of the slope of the hill, and solving for time. I then plugged T into Range = initial x velocity * time and combined the terms. My answer comes out to exactly what the original equation I want to derive is except it is over the term [g] not [g cos(A)]. I'm not sure where that cosine of the slope of the hill comes from.
4. Feb 17, 2014
### voko
You found the value of $x$ where the ball touches the hill. Well done! But you are asked to find the distance along the hill. It is very simply related with the the x-value you found. You are just a step away from the correct answer.
|
2018-01-21 07:33:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6067090630531311, "perplexity": 726.5831701004084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890314.60/warc/CC-MAIN-20180121060717-20180121080717-00315.warc.gz"}
|
http://mymathforum.com/algebra/22294-minimum-value.html
|
My Math Forum minimum value
Algebra Pre-Algebra and Basic Algebra Math Forum
November 3rd, 2011, 07:36 PM #1 Senior Member Joined: Jul 2011 Posts: 405 Thanks: 16 minimum value In a given triangle we have to prove the value of $P=DA+DB+DC$ is minimum when point $D$ lie on Centroid how can i prove it
November 4th, 2011, 02:28 AM #2 Senior Member Joined: Feb 2010 Posts: 706 Thanks: 141 Re: minimum value Assuming all angles are less than 120 degrees ... you will have a tough time proving this. In general it is not true. Try Googling "Fermat Point".
November 5th, 2011, 06:35 AM #3 Senior Member Joined: Jul 2011 Posts: 405 Thanks: 16 Re: minimum value thanks mrtwhs Now if $Q=(DA)^2+(DB)^2+(DC)^2=$Minimum. Then Point $P$ is at Centroid or not If Yes How can I prove it thanks
November 5th, 2011, 08:13 AM #4 Global Moderator Joined: Dec 2006 Posts: 20,810 Thanks: 2153 Use coordinate geometry.
November 6th, 2011, 02:25 AM #5 Senior Member Joined: Jul 2011 Posts: 405 Thanks: 16 Re: minimum value I want more light on that Method (Coordinate geometry) thanks
November 6th, 2011, 01:52 PM #6 Global Moderator Joined: Dec 2006 Posts: 20,810 Thanks: 2153 Assign Cartesian coordinates for A, B, C and D, then express Q in terms of them and use "completing the square" to determine the coordinates for D that minimize Q.
Tags minimum
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post Telu Calculus 4 August 2nd, 2013 10:00 PM dk1702 Calculus 4 May 11th, 2010 05:58 PM Axle12693 Algebra 2 February 9th, 2010 03:33 PM K Sengupta Number Theory 2 March 5th, 2009 06:12 AM ferret Calculus 5 October 30th, 2008 05:09 PM
Contact - Home - Forums - Cryptocurrency Forum - Top
|
2019-07-18 13:47:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7091040015220642, "perplexity": 5800.056715772335}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525634.13/warc/CC-MAIN-20190718125048-20190718151048-00526.warc.gz"}
|
http://www.etiquettehell.com/smf/index.php?topic=83832.4740
|
News: IT'S THE 2ND ANNUAL GUATEMALA LIBRARY PROJECT BOOK DRIVE! LOOKING FOR DONATIONS OF SCIENCE BOOKS THIS YEAR. Check it out in the "Extending the Hand of Kindness" folder or here: http://www.etiquettehell.com/smf/index.php?topic=139832.msg3372084#msg3372084
• October 18, 2017, 01:08:06 PM
### Author Topic: "I'm never shopping THERE again!" Share your story! (Read 2564311 times)
0 Members and 4 Guests are viewing this topic.
#### BB-VA
• Member
• Posts: 847
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4740 on: September 21, 2013, 06:42:08 PM »
ON a nostalgic note, who remembers the days when every fabric counter had a little machine that the fabric was fed though? As you pulled it through, the machine measured it, and at the preset stopping point, made a little notch in the fabric so it could be torn at that point.
My grandmother hated her fabric to be torn - she felt it was more likely to go off-grain than if it was cut. Every time I hear someone tearing fabric in a store I hear my grandma shrieking in outrage! :-)
Your poor grandmother. She observed correctly, that tearing fabric will sometimes result in a twisted edge...but she concluded incorrectly that the tearing was causing the problem. In truth, tearing was revealing that in the processing between the weaving and the putting the fabric on the bolt, it had become twisted, then pressed into shape. Fabric is nearly always woven straight on grain in the greige goods, and any warping or twisting occurs in the printing and processing.
And if you've ever wondered why garment turn themselves inside-out when you take them off, or in the wash? It's because they are NOT turning themselves inside out- it's that we WEAR them inside out! The garment remembers how it was sewed together, and is trying to turn itself back to the orientation in which it was sewn. Garments that have the seam on the outside are much less likely to turn themselves (unless they're jeans, which turn because they're tight!)
Exactly!!! In the scenario I described (the plant manager who allowed cutting), each time the fabric was cut off-grain, and then resewed for processing, the skew got worse and worse. The piece I described (if I remember correctly - it's been a long time) was off grain by 15 inches.
"The Universe puts us in places where we can learn. They are never easy places, but they are right. Wherever we are, it's the right place and the right time. Pain that sometimes comes is part of the process of constantly being born."
- Delenn to Sheridan: "Babylon 5 - Distant Star"
#### jayhawk
• Member
• Posts: 1275
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4741 on: September 21, 2013, 08:04:55 PM »
I am so glad I learned how to tell if fabric is off grain and how to pull it back. Have had to do it many times. I usually buy an extra 1/4 to 1/3 yard of woven fabric in case it's off.
#### VorFemme
• Member
• Posts: 13768
• It's too darned hot! (song from Kiss Me, Kate)
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4742 on: September 21, 2013, 08:11:08 PM »
Sadly, some of the newer fabrics cannot be gotten back on grain - all the processing for wrinkle free and so forth leaves it with a memory of being off-grain.
Ripping to get it on grain works for some fabrics (it is a pain in the asterisk for tapestry due to all the various fine threads to weave the designs in - but at least tapestries aren't processed to a fare-thee-well and will stay on grain once they have been cut with a drawn thread). Some of the ******* blends will fight you warp & weft to get back to their chemically altered slanted grain line - steaming with vinegar & water helps - but doesn't always help ENOUGH.
Let sleeping dragons be.......morning breath......need I explain?
#### gmatoy
• Member
• Posts: 2935
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4743 on: September 21, 2013, 09:22:45 PM »
I once worked at a finishing plant for woven textiles and at certain points in processing samples had to be removed from a piece of fabric. The company REQUIRED employees to tear rather than cut fabric. They were allowed to use scissors to start the tear but tearing was required due to the "skew" that would be caused in processing if the end of the fabric was not straight.
I saw this demostrated by a sample piece of fabric we got from another factory after the samples had been cut rather than torn. That plant manager allowed cutting because unprocessed corduroy and velveteen are pretty tough and do take a bit of strength to tear. It looked like a kite, it was so skewed. Multiply that by thousands of yards and you might understand why that plant manager was eventually fired (maybe this should be in the PD thread too).
For those who don't sew - fabric that is cut & sewn into a garment "off grain" is going to hang oddly. Usually this ranges from "not the best idea" to "really, really bad idea" - because a twisted garment is going to fight to hang the way it wants instead of the way it is supposed to hang.
I've seen a "plain straight skirt" that tried to hang as if it were twisted like the diagonal stripes on a peppermint stick...it had not bee cut to be a spiral skirt - so the twist made it at least a size too small for the person trying to wear it. They had no idea WHY it didn't fit (and it had been bought not made by them - probably why the skirt was on the clearance rack was because it didn't fit correctly).
Very rarely, it might "work" - but only if the fabric is about 1% to 2% off grain (there are a very, very few fabrics that work just as well when cut slightly off true grain - off hand, I can't think of any that work "better" cut off grain).
Woodworkers - think trying to force a warping board back to shape. It's more work than it's worth!
Thanks to both of you for explaining why this makes such a difference! Sometimes you are so involved in telling your story that you miss some of the details! Again, thank you!
#### Elfmama
• Member
• Posts: 4595
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4744 on: September 21, 2013, 09:33:07 PM »
ON a nostalgic note, who remembers the days when every fabric counter had a little machine that the fabric was fed though? As you pulled it through, the machine measured it, and at the preset stopping point, made a little notch in the fabric so it could be torn at that point.
My grandmother hated her fabric to be torn - she felt it was more likely to go off-grain than if it was cut. Every time I hear someone tearing fabric in a store I hear my grandma shrieking in outrage! :-)
I remember those machines. They'd give you about 37-38 inches of fabric in a nominal yard (36 inches, for our metric people). That's probably why you don't see them any more.
And the grandmother shrieking reminded me of a story from very early in my marriage. If you've ever worked with velvet, you know that the only way to get a really straight line is to tear it. I'd bought velvet for I think $5/yard. (Ordinary cotton calico was about 50 cents/yd, so you can see that it wasn't cheap.) And when I started working with it and ripped it, I thought DH was going to levitate right through the ceiling. "WHAT ARE YOU DOING?!?!?" ~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~ Common sense is not a gift, but a curse. Because then you have to deal with all the people who don't have it. ~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~ #### gmatoy • Member • Posts: 2935 ##### Re: "I'm never shopping THERE again!" Share your story! « Reply #4745 on: September 21, 2013, 09:41:25 PM » Ah, my DH knows that I know what I'm doing with fabrics and never says a word! #### Kaymyth • Member • Posts: 939 ##### Re: "I'm never shopping THERE again!" Share your story! « Reply #4746 on: September 21, 2013, 11:57:13 PM » Well, I dropped our local cable company for high-speed Internet service a couple of weeks ago. I do work from home which means I need a reliable connection. For the past few months the service from this local cable company has gone from okay to horrible. Every few weeks the signal to our modem would drop off. It would start out as a drop for a couple of minutes. Then it became 20-30 minutes. Then it would be out for a couple of hours. Call the cable company and they would check the connection from their end. The typical answer was that they could not see our modem. They will have a technician come out in 7-8 days. Thank you for calling, click. No troubleshooting, no trying to fix the problem over the phone. Wait a week for the technician to come. That is not acceptable for my business nor my personal needs. Well, this problem was one that would come and go. We could be down several hours or a couple of days. But, by the time the technician arrived (7-8 days later), the problem had cleared itself. The technician would check the modem, check the lines, and say everything is cleared up. He leaves. Three to four weeks later, this scenario repeats itself. When it happened the third time, I figured nothing was going to change. After being told the technician would be out in 7-8 days, thank you for calling, click, I decided enough was enough. I called the local phone company and set-up a DSL account. Within two days, I had the DSL service up and running. It is a slower service, but it is at least reliable and they provide good technical support. I took the modem back to the cable company and closed the high-speed internet account. The ironic thing? The cable company did not cancel the technician visit. So, days after I had canceled the service, the technician arrives to check things out. I had this exact problem some years ago when I was stuck with Comcast. The problem turned out to be an extremely variable signal on their end; basically, the signal strength would wobble up and down and all over the map because they didn't actually care enough about the region to bother to fix the lines. Signal strength on a cable modem is tricky; the actual cable TV can handle a much larger range than the modem can. Go too high, drop too low, whoops, it's out. I finally got a cable guy to leave me a set of splitters with different signal cuts on them. When the signal would shoot too high, I'd put it on the larger signal cut splitter. When it'd drop too low, I'd switch back to the smaller. It was annoying, but it kept me in internet until I moved out of that place. #### perpetua • Member • Posts: 1922 ##### Re: "I'm never shopping THERE again!" Share your story! « Reply #4747 on: September 22, 2013, 07:04:53 AM » With apologies for length: anyone who gets through this in one piece gets a cookie - Currys/PC World (well, I probably will, but I'm not happy with them) and more specifically, their repair/tech service, KnowHow. Also, Acer, because their goods are obviously really shoddily made. In February I bought an 11.6 inch Acer laptop. Lovely little thing, which I picked especially because it was small and light and easy for me to carry around in a backpack on my crutches. Within two weeks it became stuck in repair mode on boot up, and on Windows 8 there's apparently no way to get out of that. Take it back, exchange it for another. Three weeks later, *that* one develops a fault where if you move the screen at all, it powers off. Take it back again. They try to insist it goes in for repair; since I haven't had enough time since the last one to get much personal stuff on there, I say no, this is the second that's gone wrong, exchange it please. Finally they do. A couple of months later, the *third* Acer develops a problem with either the power supply or the port that the power supply goes into, because it won't recognise it's plugged into the mains. The battery runs down and, of course, it won't charge or boot. So I call PC World's tech support/repair service, KnowHow. The first tech I speak to tells me I need to "change the fuse in the plug". But it's a sealed unit, I say. It isn't a problem with the plug. Besides, I have an old power supply left over from one of the other two Acers that got exchanged, and that doesn't work either. It's obviously a problem with the connection on the motherboard or the socket. He keeps insisting the fuse needs to be changed. I finally ask to be put through to someone else, said someone else agrees it sounds like a motherboard or power supply issue, and arranges to collect it for repair. The next day, the collections people turn up and try to take my laptop away without giving me proof of collection. No, says I. I insist you leave me with *something* to show you've collected it. So he has to go back to the van to write out another 'stock in transit' form, because all three copies of the triplicate are used within the organisation. Laptop is due to be returned a week later, on the Thursday. I call and inform them that I'll be out that day, and could it be delivered back to me on Friday instead? No problem, we'll put the request through. Later that day, I check the tracker; it's still showing Thursday. I call back, and they insist the request for Friday has been put through and the tracker just hasn't updated yet. But please be assured you *will* receive it on Friday. Fine, thank you very much. Friday morning, I log into the tracker to check the time of delivery, and discover it's not being delivered till Saturday. Not happy, I call them back, explain, and they say the vans have already left for the day and they can't get it out to me Friday. OK, fine. It's only an extra day and I've got my old iMac, no biggie. Saturday morning, I'm sitting in my living room and I hear some rattling of the mailbox on the external door to the flats. I rush out to the front door just in time to see the PC World delivery men disappearing down the steps back to the van. I call after them. You didn't ring the bell, I said. "We rang the top one", he said. The one clearly labelled Flat D. But I'm Flat A, the bottom one, I say. Like on the paperwork? Oh yeah. Sorry, they say. Disaster averted, I sign for my laptop. I go back into the living room, excitedly unwrap my repaired laptop, boot it up, and my login is gone. They've FORMATTED THE HARD DRIVE. For a power supply issue. I call them back. Admittedly, I am furious, and not my most polite. "You were warned about data loss" they say. "But why did they even do this, when the issue was the power supply?!" I say. "Well the tech thought it was running a bit slow, and we have to return it to you in usable condition". All my data is gone. All my apps, everything. Thankfully I have backups of some things, but not all, because this all happened in a hurry and since the darned thing wouldn't boot, I couldn't do a full backup before I sent it in. All my old emails, for example: gone. And it's not Outlook, so no handy .pst file. I ask about data recovery. "If you want to do that, you'll have to take it back to the store, and they'll charge you for it", they say. "But this is your fault", I say. "We're not liable. You were warned about data loss", they say. Yes, IF the original problem was something to do with the hard drive, maybe?!?! This goes around for half an hour until I hang up, frustrated. I then think "Better plug it in, battery's a bit low". I plug the power supply in and - you know where this is going, right? Not recognised. The tech hadn't even *looked* at the original problem, let alone fixed it. So, my laptop is wiped, for no reason, and still broken. Eventually, I get through to the support line. We can pick it up for another repair on Thursday, they said. Not really acceptable, I said. You've already had it a week, not fixed the original problem, wiped my hard drive for no reason, and returned it a day late. Please collect it today. We can't do that. Eventually I got so p1ssed off with the situation that rather than go through KnowHow again, I marched it back into the store and asked them to exchange it for another brand, which to their credit, they did, even though it was out of the exchange time period. I now have an HP, which I hope will not break three times in six months. But all my data is still gone. You couldn't make it up, could you? « Last Edit: September 22, 2013, 07:15:16 AM by perpetua » #### siamesecat2965 • Member • Posts: 9055 ##### Re: "I'm never shopping THERE again!" Share your story! « Reply #4748 on: September 22, 2013, 09:04:04 AM » With apologies for length: anyone who gets through this in one piece gets a cookie - Currys/PC World (well, I probably will, but I'm not happy with them) and more specifically, their repair/tech service, KnowHow. Also, Acer, because their goods are obviously really shoddily made. In February I bought an 11.6 inch Acer laptop. Lovely little thing, which I picked especially because it was small and light and easy for me to carry around in a backpack on my crutches. Within two weeks it became stuck in repair mode on boot up, and on Windows 8 there's apparently no way to get out of that. Take it back, exchange it for another. Three weeks later, *that* one develops a fault where if you move the screen at all, it powers off. Take it back again. They try to insist it goes in for repair; since I haven't had enough time since the last one to get much personal stuff on there, I say no, this is the second that's gone wrong, exchange it please. Finally they do. A couple of months later, the *third* Acer develops a problem with either the power supply or the port that the power supply goes into, because it won't recognise it's plugged into the mains. The battery runs down and, of course, it won't charge or boot. So I call PC World's tech support/repair service, KnowHow. The first tech I speak to tells me I need to "change the fuse in the plug". But it's a sealed unit, I say. It isn't a problem with the plug. Besides, I have an old power supply left over from one of the other two Acers that got exchanged, and that doesn't work either. It's obviously a problem with the connection on the motherboard or the socket. He keeps insisting the fuse needs to be changed. I finally ask to be put through to someone else, said someone else agrees it sounds like a motherboard or power supply issue, and arranges to collect it for repair. The next day, the collections people turn up and try to take my laptop away without giving me proof of collection. No, says I. I insist you leave me with *something* to show you've collected it. So he has to go back to the van to write out another 'stock in transit' form, because all three copies of the triplicate are used within the organisation. Laptop is due to be returned a week later, on the Thursday. I call and inform them that I'll be out that day, and could it be delivered back to me on Friday instead? No problem, we'll put the request through. Later that day, I check the tracker; it's still showing Thursday. I call back, and they insist the request for Friday has been put through and the tracker just hasn't updated yet. But please be assured you *will* receive it on Friday. Fine, thank you very much. Friday morning, I log into the tracker to check the time of delivery, and discover it's not being delivered till Saturday. Not happy, I call them back, explain, and they say the vans have already left for the day and they can't get it out to me Friday. OK, fine. It's only an extra day and I've got my old iMac, no biggie. Saturday morning, I'm sitting in my living room and I hear some rattling of the mailbox on the external door to the flats. I rush out to the front door just in time to see the PC World delivery men disappearing down the steps back to the van. I call after them. You didn't ring the bell, I said. "We rang the top one", he said. The one clearly labelled Flat D. But I'm Flat A, the bottom one, I say. Like on the paperwork? Oh yeah. Sorry, they say. Disaster averted, I sign for my laptop. I go back into the living room, excitedly unwrap my repaired laptop, boot it up, and my login is gone. They've FORMATTED THE HARD DRIVE. For a power supply issue. I call them back. Admittedly, I am furious, and not my most polite. "You were warned about data loss" they say. "But why did they even do this, when the issue was the power supply?!" I say. "Well the tech thought it was running a bit slow, and we have to return it to you in usable condition". All my data is gone. All my apps, everything. Thankfully I have backups of some things, but not all, because this all happened in a hurry and since the darned thing wouldn't boot, I couldn't do a full backup before I sent it in. All my old emails, for example: gone. And it's not Outlook, so no handy .pst file. I ask about data recovery. "If you want to do that, you'll have to take it back to the store, and they'll charge you for it", they say. "But this is your fault", I say. "We're not liable. You were warned about data loss", they say. Yes, IF the original problem was something to do with the hard drive, maybe?!?! This goes around for half an hour until I hang up, frustrated. I then think "Better plug it in, battery's a bit low". I plug the power supply in and - you know where this is going, right? Not recognised. The tech hadn't even *looked* at the original problem, let alone fixed it. So, my laptop is wiped, for no reason, and still broken. Eventually, I get through to the support line. We can pick it up for another repair on Thursday, they said. Not really acceptable, I said. You've already had it a week, not fixed the original problem, wiped my hard drive for no reason, and returned it a day late. Please collect it today. We can't do that. Eventually I got so p1ssed off with the situation that rather than go through KnowHow again, I marched it back into the store and asked them to exchange it for another brand, which to their credit, they did, even though it was out of the exchange time period. I now have an HP, which I hope will not break three times in six months. But all my data is still gone. You couldn't make it up, could you? Wow. You have infinitely more patience than I would have every had. after the second time, I would have done what you did in the end. that's just crazy. I will say, I have a 3+ year old HP laptop which is still chugging away, and I've had absolutely no issues with it. None at all. So hopefully yours will behave as well. #### VorFemme • Member • Posts: 13768 • It's too darned hot! (song from Kiss Me, Kate) ##### Re: "I'm never shopping THERE again!" Share your story! « Reply #4749 on: September 22, 2013, 10:09:13 AM » Mom has an Acer laptop that she travels with - it runs Vista, so it's "older". It's been maxed out on the RAM about two years ago. The problem NOW is that one hinge fails because the laptop got dropped and part of the base is broken. The parts are available on eBay to fix it for around$40 or \$65 (depending on which vendor and whether you get just the one part or the whole base of the laptop & do some Frankenstein-style engineering). Frankenstein-style engineering is easier as you don't have to take the laptop apart into nearly as many small pieces... Taking it apart into smaller pieces is cheaper for the parts - but takes a lot more time.
I am not telling my mother to just wipe it or hook it up to an external monitor instead of using it as a laptop. It's her money, her data, and her decision. Well, my dad keeps chiming in with his opinions, but it's half his money. He only uses the computer for bridge, though, so his technical opinions are a little unhelpful.
Let sleeping dragons be.......morning breath......need I explain?
#### jedikaiti
• Swiss Army Nerd
• Member
• Posts: 2771
• A pie in the hand is worth two in the mail.
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4750 on: September 22, 2013, 11:44:31 AM »
And perpetua has hit on why I will never trust a box store's tech support with my computers.
A friend of mine once took her laptop to a box store for repair while her husband was deployed to a combat zone, they decided her antivirus was pirated (it wasn't) and disabled it. Between that and the gal whose laptop was STOLEN by techs working for the same chain, they will never get my repair business.
What part of v_e = \sqrt{\frac{2GM}{r}} don't you understand? It's only rocket science!
"The problem with re-examining your brilliant ideas is that more often than not, you discover they are the intellectual equivalent of saying, 'Hold my beer and watch this!'" - Cindy Couture
#### misha412
• Member
• Posts: 345
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4751 on: September 22, 2013, 11:55:24 AM »
And perpetua has hit on why I will never trust a box store's tech support with my computers.
A friend of mine once took her laptop to a box store for repair while her husband was deployed to a combat zone, they decided her antivirus was pirated (it wasn't) and disabled it. Between that and the gal whose laptop was STOLEN by techs working for the same chain, they will never get my repair business.
Ugh, big box store techs. I wouldn't trust them with a 10 mile pole.
Also, DO NOT download the software that claims it can fix a slow PC or one that has problems. A friend of Mr. M did this and basically lost everything on his PC. Mr. M is a computer tech with decades of experience and even he could not fix the problems this software did to the hard drive. He eventually had to wipe the entire drive and start all over. Just don't do it.
#### Snooks
• Member
• Posts: 2563
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4752 on: September 22, 2013, 11:59:31 AM »
Can't remember if I've mentioned this but the independent coffee shop in the next road will never get my business, at least in Starbucks they're friendly. In there unless you're part of their inner sanctum you're not worth paying attention to, as in, they stare at you blankly as you approach the counter, don't offer any form of advice about the myriad of ways they serve their coffee and any comments on Trip Advisor are met with denials and insults from the owner.
#### misha412
• Member
• Posts: 345
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4753 on: September 22, 2013, 12:00:57 PM »
Well, I dropped our local cable company for high-speed Internet service a couple of weeks ago.
I do work from home which means I need a reliable connection. For the past few months the service from this local cable company has gone from okay to horrible. Every few weeks the signal to our modem would drop off. It would start out as a drop for a couple of minutes. Then it became 20-30 minutes. Then it would be out for a couple of hours.
Call the cable company and they would check the connection from their end. The typical answer was that they could not see our modem. They will have a technician come out in 7-8 days. Thank you for calling, click. No troubleshooting, no trying to fix the problem over the phone. Wait a week for the technician to come. That is not acceptable for my business nor my personal needs.
Well, this problem was one that would come and go. We could be down several hours or a couple of days. But, by the time the technician arrived (7-8 days later), the problem had cleared itself. The technician would check the modem, check the lines, and say everything is cleared up. He leaves. Three to four weeks later, this scenario repeats itself.
When it happened the third time, I figured nothing was going to change. After being told the technician would be out in 7-8 days, thank you for calling, click, I decided enough was enough. I called the local phone company and set-up a DSL account. Within two days, I had the DSL service up and running. It is a slower service, but it is at least reliable and they provide good technical support.
I took the modem back to the cable company and closed the high-speed internet account. The ironic thing? The cable company did not cancel the technician visit. So, days after I had canceled the service, the technician arrives to check things out.
I had this exact problem some years ago when I was stuck with Comcast. The problem turned out to be an extremely variable signal on their end; basically, the signal strength would wobble up and down and all over the map because they didn't actually care enough about the region to bother to fix the lines. Signal strength on a cable modem is tricky; the actual cable TV can handle a much larger range than the modem can. Go too high, drop too low, whoops, it's out.
I finally got a cable guy to leave me a set of splitters with different signal cuts on them. When the signal would shoot too high, I'd put it on the larger signal cut splitter. When it'd drop too low, I'd switch back to the smaller. It was annoying, but it kept me in internet until I moved out of that place.
That is the exact problem. Mr. M and I are both computer geeks and figured out the problem months ago and told the cable company about it. (We live in a more rural area of the county). The cable company claimed they had "boosted" the signal, but the problem kept occurring. We even changed out modems at least three or four times. It just got to the point of being ridiculous and I was aggravated to the point of (doing something not e-hell approved).
#### Sara Crewe
• Member
• Posts: 2928
##### Re: "I'm never shopping THERE again!" Share your story!
« Reply #4754 on: September 22, 2013, 12:03:39 PM »
Can't remember if I've mentioned this but the independent coffee shop in the next road will never get my business, at least in Starbucks they're friendly. In there unless you're part of their inner sanctum you're not worth paying attention to, as in, they stare at you blankly as you approach the counter, don't offer any form of advice about the myriad of ways they serve their coffee and any comments on Trip Advisor are met with denials and insults from the owner.
And when they go under it will be because of competition from the chains and nothing to do with their poor customer service.
|
2017-10-18 18:08:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3140009045600891, "perplexity": 3809.2216664672183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823067.51/warc/CC-MAIN-20171018180631-20171018200631-00698.warc.gz"}
|
https://numericalshadow.org/doku.php?id=numerical-range:generalizations:application-of-higher-rank-and-p-k-numerical-range
|
The web resource on numerical range and numerical shadow
### Site Tools
numerical-range:generalizations:application-of-higher-rank-and-p-k-numerical-range
# Application of higher rank numerical range and $(p,k)$- numerical range
In this section we present the motivation behind introduce definitions of higher rank numerical range and $(p,k)$- numerical range. Let $M_n$ will be the set of all matrices of dimension $n$. We will consider linear mapping transforming given matrix into another matrix. Such mapping can be represented by operator sum representation (Kraus representation) as $$\Phi(X) = \sum_{i} A_i X A_i$$ for some matrices $A_i$. The special linear mapping transforming state into another state is well-known as quantum channel. One would like to consider a recovery channel $R$ such that $R \circ \Phi(X) = X$ whenever $PXP=X$ for some orthogonal projection $P$. The range space of $P$ is known as a quantum error correction code of the channel $\Phi$. The task is finding $P$ with a maximum rank. For a given quantum channel $\Phi$ this problem is equivalent to existing scalars $\lambda_{i,j} \in \mathbb{C}$ such that $$PA_i^\dagger A_i P = \lambda_{i,j} P \text{ for all } 1\le i,j\le r.$$ This leads to the study higher rank numerical range.
We can also naturally extend above error correction scheme [1]. Now we consider that for a given quantum channel $\Psi$ we would like to find a recovery channel $R$ such that for each $B \in M_k$ $$R \circ \Psi \left( \left( \1_p \otimes B \right) \oplus \mathbf{0}_{n-pk} \right) = \left( A_B \otimes B \right) \oplus \mathbf{0}_{n-pk}$$ for some $A_B \in M_p$. Analogously, his problem can be is reduced to showing that such recovery channel $R$ exists if and only if there are scalars $\lambda_{ijrs} \in \mathbb{C}$ such that $$P_{kk} A_i^\dagger A_j P_{ll} = \lambda_{ijrs} P_{kl} \text{ for all } 1\le i,j\le r \text{ and } 1\le k,l \le p$$ where $P_{kl} = ( \ket{k}\bra{l} \otimes \mathbf{1}_k ) \oplus \mathbf{0}_{n-pk}$ with fixed an arbitrary orthonormal basis $\{ e_1, \ldots, e_p \}$ in $\mathbb{C}^p$. This approach we can simplify to consideration $(p,k)$ numerical range.
1. Man-Duen Choi, Nathaniel Johnston, David W Kribs, 2009. The multiplicative domain in quantum error correction. Journal of Physics A: Mathematical and Theoretical, 42, IOP Publishing, pp.245303.
|
2020-04-02 09:03:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.8771060705184937, "perplexity": 225.07590421741455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506870.41/warc/CC-MAIN-20200402080824-20200402110824-00559.warc.gz"}
|
https://proxieslive.com/tag/formula/
|
## Is this the simplest way to visually prove the “scalene trapezoid area formula”
Please refer to image. Is there a simpler way to visually prove the scalene trapezoid area formula?
My colleague has a spreadsheet full of links. She copy / pasted this from a variety of different sources.
For a number of reasons, I want to convert these from just plain copy / pasted links to HYPERLINK formula cells.
Is there anyway to do this?
## Google Sheets: Conditional Formatting – Custom Formula – Text Contains
Okay, so I have dates in column B that include weekdays in the following format: DDD, MMM DD, YYY (For example, Sat, Jan 19, 2019).
I want each row to auto format to a specific background colour based on the weekday in column B.
So, if B1 contains “Sat” I want row 1 to turn blue. If B100 contains “Sat” I want row 100 to turn blue.
I have tried using suggestions from these threads like regexmatch: Conditional formatting based on portion of text
or countif: https://stackoverflow.com/questions/27723102/finding-partial-texts-in-conditional-formatting-custom-formula-is
But neither of these seem to work for me.
I am at a loss. Any help is appreciated.
## A new formula for the class number of the quadratic field $\mathbb Q(\sqrt{(-1)^{(p-1)/2}p})$?
I have the following conjecture involving a possible new formula for the class number of the quadratic field $$\mathbb Q(\sqrt{(-1)^{(p-1)/2}p})$$ with $$p$$ an odd prime.
Conjecture. Let $$p$$ be an odd prime and let $$p^*=(-1)^{(p-1)/2}p$$. Then the class number $$h(p^*)$$ of the quadratic field $$\mathbb Q(\sqrt{p^*})$$ coincides with the number
$$\frac{(\frac{-2}p)}{2^{(p-3)/2}p^{(p-5)/4}}\det\left[\cot\pi\frac{jk}p\right]_{1\le j,k\le (p-1)/2},$$ where $$(\frac{\cdot}p)$$ is the Legendre symbol.
This is Conjecture 5.1 in my preprint arXiv:1901.04837. I have checked it for all odd primes $$p<29$$. Note that $$h(p^*)=1$$ for each odd prime $$p<23$$, and $$h(-23)=3$$.
Here I invite some of you to check this conjecture further. My computer cannot check it even for $$p=29$$.
## Derivation of Perceptron weight update formula
I’ve started out studying Machine Learning and am currently reading up about how a single perceptron works. From the wikipedia page, my understanding is as follows: suppose we have an input sample $$\mathbf{x} = [x_1, \ldots, x_n]^T$$, an initial weight vector $$\mathbb{w} = [w_1, \ldots, w_n]^T$$. Let the true output corresponding to $$\mathbf{x}$$ be $$y’$$.
The output given by the perceptron is $$y = f(\sum_{i=0}^n w_ix_i)$$, where $$w_0$$ is the bias and $$x_0=1$$. If $$\eta$$ is the learning rate, the weights are updated according to the following rule: $$\Delta w_i = \eta x_i(y’-y)$$
This is according to wikipedia. But I know the weights are updated on the basis of the gradient descent method, and I found another nice explanation based on the gradient descent method HERE. The derivation there results in the final expression for weight update:
$$\Delta w_i = \eta x_i(y’-y)\frac{df(S)}{dS}$$
where $$S = \sum_{i=0}^{n}w_ix_i$$. Is there a reason why this derivative term is ignored? There was another book that mentioned the same weight update formula as Wikipedia, without the derivative term. I’m pretty sure we can’t just assume $$f(S) = S$$.
## Fórmula compleja con PHP/MySqli
Estoy haciendo unos cálculos en mysqli que no encuentro la forma de hacerlo. Tengo una tabla, que suma los puntos de cada equipo, perfecto, pero necesito hacer un cálculo adicional en la cual necesito ayuda.
Mi problema es con la columna en negrilla marcada como SB. Para calcularla se debe proceder de la siguiente manera:
1.- Cada valor a sumar es el de la columna SUM.
2.- Solo se suma el valor, cuando un equipo ganó, es decir tiene 1 punto.
3.- Sumamos la mitad de puntos, de aquellos con quienes entablamos.
4.- Si un equipo perdió, no se suma nada.
Ver la tabla:
Ejemplo 1: El equipo RDO:
RDO perdió con PRC, suma 0
RDO ganó a BRA, suma 2 de la columna SUM
RDO empató con ARG, suma 1/2 de 4.5 de la columna SUM
RDO ganó a Ven, suma 2 de la columna SUM
RDO ganó a Nic, suma 2 de la columna SUM
SB de RDO = 2+4.5/2+2+2 = 8.25
Todo esto lo hago en excel si problema, pero en php/mysqli, necesito ayuda.
Alguna idea?
## Auto-Sort script which excludes formula if ascending False
I am looking to exclude formulas or blank cells when I am running my auto sort. I used the following… It sorts fine, but I wanted the numbers to be ascending: False. When I change this, it takes account of the formula I have set for my column I am sorting by and puts them all at the top and my actual data points to at the bottom where my formulas stopped… Is there a way to exclude this?
function onEdit(e) { var ss = SpreadsheetApp.getActiveSpreadsheet(); var sheet = ss.getSheetByName("G2 List") var range = sheet.getRange("A2:T200"); var columnToSortBy = 14; // Sorts by the values in column 14 (N) range.sort({column: 14, ascending: false}); }
## Google Sheets Formula for Proper Case with Apostrophe
I’m using a Google Docs spreadsheet glued together (not be me) from multiple sources where some of the names are in all caps. In trying to clean web ready content, I have good success with the proper(#ref) function except when it is a name with an apostrophe.
If the cell contains SMITTY'S running it through the proper() function returns Smitty'S and is ugly. Any clever ideas around this?
## Generalize Wu formula to general Bockstein homomorphisms
The classical Wu formula claims that $$Sq^1(x_{d-1})=w_1(TM)\cup x_{d-1}$$ on a $$d$$-manifold $$M$$, where $$x_{d-1}\in H^{d-1}(M,\mathbb{Z}_2)$$.
I wonder whether there is a generalization of the classical Wu formula to general Bockstein homomorphisms. We consider the Bockstein homomorphism $$\beta_{(2,2^n)}:H^*(-,\mathbb{Z}_{2^n})\to H^{*+1}(-,\mathbb{Z}_2)$$ which is associated to the extension $$\mathbb{Z}_2\to\mathbb{Z}_{2^{n+1}}\to\mathbb{Z}_{2^n}$$.
I guess there is a generalized Wu formula: $$\boxed{\beta_{(2,2^n)}(x_{d-1})=\frac{1}{2^{n-1}}\tilde w_1(TM)\cup x_{d-1}}$$ on a $$d$$-manifold $$M$$, where $$x_{d-1}\in H^{d-1}(M,\mathbb{Z}_{2^n})$$.
Here $$\tilde w_1(TM)$$ is the twisted first Stiefel-Whitney class of the tangent bundle $$TM$$ of $$M$$ which is the pullback of $$\tilde w_1$$ under the classifying map $$M\to BO(d)$$. Let $$\mathbb{Z}_{w_1}$$ denote the orientation local system, the twisted first Stiefel-Whitney class $$\tilde w_1\in H^1(BO(d),\mathbb{Z}_{w_1})$$ is the pullback of the nonzero element of $$H^1(BO(1),\mathbb{Z}_{w_1})=\mathbb{Z}_2$$ under the determinant map $$B\det:BO(d)\to BO(1)$$.
The right hand side makes sense since $$2\tilde w_1(TM)=0$$.
Can you help me to prove or disprove the boxed formula above?
Thank you!
## Proof that 2^yx+2^y-1 is an closed formula of f(1,x,y)
I should proof that 2^yx+2^y-1 is an closed formula of g(1,x,y) with Induktion or somthing else. Given is
$$g(n,x,y)=\begin{cases} x+y,\quad if\quad n=0 \ x,\quad \quad \quad if\quad n>0\quad and\quad y=0 \ f(n-1,f(n,x,y-1),f(n,x,y-1)+1,\quad else \end{cases}$$
|
2019-02-19 18:15:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7902821898460388, "perplexity": 2139.9190604417518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490806.45/warc/CC-MAIN-20190219162843-20190219184843-00050.warc.gz"}
|
http://blog.haberkucharsky.com/technology/2015/07/21/more-monads-in-ocaml.html
|
While popular in Haskell, monads can be encoded effectively in OCaml too. This post describes one such encoding, as well as the process of developing a modern OCaml program.
I assume that you know what monads are and why you should care about them. If this is not the case, then there are some excellent books 1 out there.
# Introduction
Haskell is famous for the integration of monads in its core language. Not only are functions like >>= supported syntactically through the do syntax 2, but monads are commonly used for program flow and used by necessity for managing and tracking effects. One of the reasons that it is so nice to use monads (and, for that matter, other abstractions like functors and monoids) in Haskell is its type-classes. Type-classes are a means of achieving so-called “ad-hoc polymorphism”: common interfaces that are shared between things that may share no other relation.
One example of a type-class defined in standard Haskell (the “Prelude”) is Show, which defines how to convert a value into a human-readable string. It makes sense that we would want to be able to show many different kinds of things. Show has the following definition (with an example instance for Bool):
There are some important characteristics of type-classes:
• Type-class functions can be applied to any type for which an instance is defined in scope somewhere. For example, as long as the Show Int instance is in scope, show 23 can be invoked without additional annotation.
• There can only be a single global instance in scope of a type-class for a particular type. This means that, for instance, that without language extensions, Show String cannot be defined differently than Show a => Show [a] since String is a type alias for [Char].
While Haskell popularized type-classes through its first-class support for the idiom, it’s absolutely possible to encode these concepts in other programming languages.
# Enter OCaml
I’m getting interested in OCaml again. It was actually the first functional programming language I learned back in 2007 or so (shortly afterwards, I learned Haskell), and there are several recent developments in the language and the ecosystem:
• The opam package manager makes it easier than ever to install OCaml and its libraries on your system, including seamless switching between versions.
• The freely-available online “Real World OCaml” book is very well-written and accessible.
• The MirageOS project for building so-called “unikernels” in OCaml: programs that are written in OCaml and that run on “bare metal” on the Xen hypervisor.
• There are rumors that OCaml will soon be multi-core ready. Currently, threads are multiplexed on a single CPU core.
• I hear that the new optimization engine, flambda, will be merged soon. It performs whole-program optimization a la MLton and apparently significantly improves performance.
I won’t attempt to make the case of OCaml versus Haskell, but it is worth noting OCaml is strict by default with strong support for lazy evaluation (unlike Haskell, in which the opposite is true).
Another aspect of OCaml that I find extremely compelling is its strong support for modular design through interfaces and its support for functors: modules that are functions of other modules. I was curious about how monads could be encoded in this system, and in particular, if monads in OCaml were viable for everyday use.
# Running these examples
I’m going to briefly outline how to get set-up with OCaml since there are still not a tonne of resources out there:
1. Install opam. Then set-up your environment via opam init --comp=4.02.2 (where OCaml 4.02.2 is the latest version as of this writing).
2. Install the utop interactive top-level with opam install utop. This is a more modern replacement for the default top-level with readline key-bindings and autocompletion. I’m not a huge fan of the curses “bling”, but it does the job.
3. (Optional) Install tools for developing OCaml programs: opam install merlin ocp-index ocp-indent. These tools (with the corresponding configuration in Emacs or Vim) make it possible to automatically indent OCaml code, jump to definitions, and dynamically inspect the types of values.
You can also find all of the OCaml source code in this post at the corresponding GitHub repository.
# Starting with a functor
One of the most simple higher-order structures in Haskell is the functor (this unfortunately shares a name with the language construct in OCaml). A functor is a structure that we can map over.
Note the file name of the snippet: in OCaml, the name of the file dictates the name of the module to which the contents belong. Therefore, we have just defined the Functor_class module, which consists of a module signature called Functor_class.S.
We can play with this definition with utop:
(Note the #mod_use directive. This loads the file by wrapping it in a module just like the OCaml compiler would. If we had typed #use instead, the definitions would be available as if we were inside the module at the top-level.)
Let’s look at a simple functor instance:
and how it could be used:
With the definition of functors in mind, look at the following definition of a monad structure:
In Haskell, type-classes can be defined in terms of a minimal set of functions that can be used to implement the other functions. In the case of the monad definition, a number of a functions are usually expressed in terms of two functions: >>= and return.
OCaml doesn’t have the notion of partially-implemented signatures in modules: either all values are abstract (module signatures) or all are concrete (modules). Thus, to mirror this notion of functions being defined in terms of others, we use a OCaml functor to “extend” the basic type-class functions of pure and bind into the rest of the monad functions. There are some more details of this technique in the functors chapter of “Real World OCaml”.
# Expressing relationships between higher-order structures
Any monad instance is a functor instance “for free”. We can express this idea in the revised definition of the functor class:
We’ve expressed that given any monad instance, we can obtain a functor instance by implementing map in terms of bind and pure.
# Looking at the option monad
The Maybe monad instance in Haskell is (in my opinion) the easiest to understand. Let’s see how it can be expressed in OCaml. Since types are lowercase in OCaml, we can create option.ml (yielding an Option module) for our definitions without any conflict with the language:
Let’s compile this module (along with its dependent modules) to see how it works using the ocamlbuild tool:
ocamlbuild option.cmo
Ocamlbuild creates compiled objects into the _build directory, so we should start-up utop with an amended include path: utop -I _build. Inside utop, we use the #load_rec directive to load the Option implementation and any modules that it depends on. We can use the #show directive to view a module’s signature from within the top-level, too.
# Monads with multiple type parameters: state
Some monads, like Haskell’s Maybe monad and IO monad, have only a single type parameter. However, many have more than one. It’s not immediately clear how to encode this into OCaml’s type system.
One solution is to use another functor. Let’s see how this works in an implementation in OCaml of the State monad in Haskell. In this case, the state type is an any module that defines a type t, and our module is parameterized on this state:
As an example, consider an example directly from “Functional Programming in Scala”: implementing a simple candy dispenser state machine. First, we’ll show the interface (which doesn’t reveal any state-monad implementation):
and then the implementation, using our state monad parameterized over Machine_state:
As a quick example of the candy dispenser:
You can compile this example with ocamlbuild main.native which will produce an executable main.native.
# Conclusions and next steps
It is undeniably more syntactically messy to define these structures in OCaml over Haskell. Nonetheless, I think it’s compelling that OCaml’s module system, which is so useful for structuring programs with interfaces, can define such high-level abstractions without breaking a sweat.
In a paradoxical way, I actually find that sometimes implementing these and other structures in OCaml (and similarly, sometimes Scala) offers more educational insight into how they all work, since there’s less magic with laziness and instances have to be brought directly into scope. I suspect it also makes readability clearer for heavily monadic code. A block that begins with
let open Option.Monad in
makes it abundantly clear that the monadic operations that follow are in the context of the option monad.
I’d like to fiddle in the future with monad transformers in OCaml, and maybe try writing a larger OCaml program that actually makes use of monads internally.
## Implicits
What’s very interesting is that there is a branch of OCaml that supports a sort-of hybrid of the existing module system and Haskell’s type-classes called implicit modules. This language construct is virtually identical to Scala’s implicit mechanism, and more details can be found here. The paper has lots of nifty examples.
|
2018-02-19 03:51:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4642777740955353, "perplexity": 1721.5764244631168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812327.1/warc/CC-MAIN-20180219032249-20180219052249-00092.warc.gz"}
|
https://gmatclub.com/forum/if-points-a-b-and-c-form-a-triangle-is-angle-abc-90-degre-55680.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 25 Apr 2019, 07:42
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# If points A, B, and C form a triangle, is angle ABC>90 degre
Author Message
TAGS:
### Hide Tags
CEO
Joined: 21 Jan 2007
Posts: 2545
Location: New York City
If points A, B, and C form a triangle, is angle ABC>90 degre [#permalink]
### Show Tags
16 Nov 2007, 08:59
1
16
00:00
Difficulty:
85% (hard)
Question Stats:
46% (01:46) correct 54% (01:47) wrong based on 334 sessions
### HideShow timer Statistics
If points A, B, and C form a triangle, is angle ABC>90 degrees?
(1) AC = AB + BC − 0.001
(2) AC = AB
M15-24
Math Expert
Joined: 02 Sep 2009
Posts: 54543
### Show Tags
21 Apr 2014, 06:05
5
10
PathFinder007 wrote:
HI Bunnel,
Thanks.
THEORY:
Say the lengths of the sides of a triangle are a, b, and c, where the largest side is c.
For a right triangle: $$a^2 +b^2= c^2$$.
For an acute (a triangle that has all angles less than 90°) triangle: $$a^2 +b^2>c^2$$.
For an obtuse (a triangle that has an angle greater than 90°) triangle: $$a^2 +b^2<c^2$$.
Points A, B and C form a triangle. Is ABC > 90 degrees?
(1) AC = AB + BC - 0.001.
If AC=0.001, AB=0.001 and BC=0.001, then the triangle will be equilateral, thus each of its angles will be 60 degrees.
If AC=10, AB=5 and BC=5.001, then AC^2>AB^2+BC^2, which means that angle ABC will be more than 90 degrees.
Not sufficient.
(2) AC = AB --> triangle ABC is an isosceles triangle --> angles B and C are equal, which means that angle B cannot be greater than 90 degrees. Sufficient.
Similar questions to practice:
are-all-angles-of-triangle-abc-smaller-than-90-degrees-129298.html
if-10-12-and-x-are-sides-of-an-acute-angled-triangle-ho-90462.html
Hope it's clear.
_________________
##### General Discussion
Senior Manager
Joined: 09 Aug 2006
Posts: 489
Re: C 15.24 degrees of a triangle [#permalink]
### Show Tags
16 Nov 2007, 21:04
bmwhype2 wrote:
Points A, B and C form a triangle. Is ABC > 90 degrees?
1. AC = AB + BC - .001
2. AC = AB
I think the answer is A.
S1 :
AC = AB + BC - .001
AC + .001 = AB + BC
Squaring B.S,
(AC + .001) ^ 2 = (AB + BC )^2
AC^2 + 2AC*.001 = AB^2 + 2AB.BC + BC^2
AC^2 = AB^2 + BC^2 + 2AB.BC + 2AC*.001
By Pythagoras Thm, Angle ABC is 90 if AC^2 = AB^2 + BC^2.
Here AC is bigger than that.. which implies that Angle ABC is > 90. Hence Suff.
PS : (I do not know of any formal rule/theorem that states that if Hypotenuse exceeds more than sum of the squares of the sides that the angle > 90. In fact that triangle is no more a right triangle but I just based this on my intuition. I just took an example of triangle with sides 3,4 and 5. )
St2 :
As AC is the Hypo which should be the longest side of a triangle. But here, the Hypotenuse is equal to a side hence this triangle is not a right triangle but we don't know if angle ABC > 90.. Hence In-suff.
Manager
Joined: 25 Jul 2007
Posts: 105
Re: C 15.24 degrees of a triangle [#permalink]
### Show Tags
16 Nov 2007, 21:47
Amit05 wrote:
bmwhype2 wrote:
Points A, B and C form a triangle. Is ABC > 90 degrees?
1. AC = AB + BC - .001
2. AC = AB
I think the answer is A.
S1 :
AC = AB + BC - .001
AC + .001 = AB + BC
Squaring B.S,
(AC + .001) ^ 2 = (AB + BC )^2
AC^2 + 2AC*.001 = AB^2 + 2AB.BC + BC^2
AC^2 = AB^2 + BC^2 + 2AB.BC + 2AC*.001
By Pythagoras Thm, Angle ABC is 90 if AC^2 = AB^2 + BC^2.
Here AC is bigger than that.. which implies that Angle ABC is > 90. Hence Suff.
PS : (I do not know of any formal rule/theorem that states that if Hypotenuse exceeds more than sum of the squares of the sides that the angle > 90. In fact that triangle is no more a right triangle but I just based this on my intuition. I just took an example of triangle with sides 3,4 and 5. )
St2 :
As AC is the Hypo which should be the longest side of a triangle. But here, the Hypotenuse is equal to a side hence this triangle is not a right triangle but we don't know if angle ABC > 90.. Hence In-suff.
I find myself inclined to agree with your logic about statement 1.
However, I find statement 2 to be sufficient by itself as well.
If ac=ab, then angle ABC = angle ACB.
Therefore angle ABC cannot be greater than 90.
Senior Manager
Joined: 09 Aug 2006
Posts: 489
Re: C 15.24 degrees of a triangle [#permalink]
### Show Tags
16 Nov 2007, 22:18
jbs wrote:
Amit05 wrote:
bmwhype2 wrote:
Points A, B and C form a triangle. Is ABC > 90 degrees?
1. AC = AB + BC - .001
2. AC = AB
I think the answer is A.
S1 :
AC = AB + BC - .001
AC + .001 = AB + BC
Squaring B.S,
(AC + .001) ^ 2 = (AB + BC )^2
AC^2 + 2AC*.001 = AB^2 + 2AB.BC + BC^2
AC^2 = AB^2 + BC^2 + 2AB.BC + 2AC*.001
By Pythagoras Thm, Angle ABC is 90 if AC^2 = AB^2 + BC^2.
Here AC is bigger than that.. which implies that Angle ABC is > 90. Hence Suff.
PS : (I do not know of any formal rule/theorem that states that if Hypotenuse exceeds more than sum of the squares of the sides that the angle > 90. In fact that triangle is no more a right triangle but I just based this on my intuition. I just took an example of triangle with sides 3,4 and 5. )
St2 :
As AC is the Hypo which should be the longest side of a triangle. But here, the Hypotenuse is equal to a side hence this triangle is not a right triangle but we don't know if angle ABC > 90.. Hence In-suff.
I find myself inclined to agree with your logic about statement 1.
However, I find statement 2 to be sufficient by itself as well.
If ac=ab, then angle ABC = angle ACB.
Therefore angle ABC cannot be greater than 90.
Ooops .. I missed that .. I think these are the traps that are set by GMAC to fool us around..
yes, D it is ..
Good question !!
Director
Joined: 09 Aug 2006
Posts: 717
Re: C 15.24 degrees of a triangle [#permalink]
### Show Tags
16 Nov 2007, 23:20
Amit05 wrote:
bmwhype2 wrote:
Points A, B and C form a triangle. Is ABC > 90 degrees?
1. AC = AB + BC - .001
2. AC = AB
I think the answer is A.
S1 :
AC = AB + BC - .001
AC + .001 = AB + BC
Squaring B.S,
(AC + .001) ^ 2 = (AB + BC )^2
AC^2 + 2AC*.001 = AB^2 + 2AB.BC + BC^2
AC^2 = AB^2 + BC^2 + 2AB.BC + 2AC*.001
--> AC^2 = AB^2 + BC^2 + 2AB.BC - (2AC*.001 + .001^2)
By Pythagoras Thm, Angle ABC is 90 if AC^2 = AB^2 + BC^2.
Here AC is bigger than that.. which implies that Angle ABC is > 90. Hence Suff.
PS : (I do not know of any formal rule/theorem that states that if Hypotenuse exceeds more than sum of the squares of the sides that the angle > 90. In fact that triangle is no more a right triangle but I just based this on my intuition. I just took an example of triangle with sides 3,4 and 5. )
St2 :
As AC is the Hypo which should be the longest side of a triangle. But here, the Hypotenuse is equal to a side hence this triangle is not a right triangle but we don't know if angle ABC > 90.. Hence In-suff.
Please see the correction in blue above.
I pick B.
SVP
Joined: 29 Aug 2007
Posts: 2326
Re: C 15.24 degrees of a triangle [#permalink]
### Show Tags
17 Nov 2007, 01:28
jbs wrote:
Amit05 wrote:
bmwhype2 wrote:
Points A, B and C form a triangle. Is ABC > 90 degrees?
1. AC = AB + BC - .001
2. AC = AB
I think the answer is A.
S1 :
AC = AB + BC - .001
AC + .001 = AB + BC
Squaring B.S,
(AC + .001) ^ 2 = (AB + BC )^2
AC^2 + 2AC*.001 = AB^2 + 2AB.BC + BC^2
AC^2 = AB^2 + BC^2 + 2AB.BC + 2AC*.001
By Pythagoras Thm, Angle ABC is 90 if AC^2 = AB^2 + BC^2.
Here AC is bigger than that.. which implies that Angle ABC is > 90. Hence Suff.
PS : (I do not know of any formal rule/theorem that states that if Hypotenuse exceeds more than sum of the squares of the sides that the angle > 90. In fact that triangle is no more a right triangle but I just based this on my intuition. I just took an example of triangle with sides 3,4 and 5. )
St2 :
As AC is the Hypo which should be the longest side of a triangle. But here, the Hypotenuse is equal to a side hence this triangle is not a right triangle but we don't know if angle ABC > 90.. Hence In-suff.
I find myself inclined to agree with your logic about statement 1.
However, I find statement 2 to be sufficient by itself as well.
If ac=ab, then angle ABC = angle ACB.
Therefore angle ABC cannot be greater than 90.
Since AC = AB + BC - .001, what if BC = 0.001? then AC = AB again as in statement 2.
CEO
Joined: 21 Jan 2007
Posts: 2545
Location: New York City
Re: C 15.24 degrees of a triangle [#permalink]
### Show Tags
17 Nov 2007, 23:39
GMAT TIGER wrote:
jbs wrote:
Amit05 wrote:
bmwhype2 wrote:
Points A, B and C form a triangle. Is ABC > 90 degrees?
1. AC = AB + BC - .001
2. AC = AB
I think the answer is A.
S1 :
AC = AB + BC - .001
AC + .001 = AB + BC
Squaring B.S,
(AC + .001) ^ 2 = (AB + BC )^2
AC^2 + 2AC*.001 = AB^2 + 2AB.BC + BC^2
AC^2 = AB^2 + BC^2 + 2AB.BC + 2AC*.001
By Pythagoras Thm, Angle ABC is 90 if AC^2 = AB^2 + BC^2.
Here AC is bigger than that.. which implies that Angle ABC is > 90. Hence Suff.
PS : (I do not know of any formal rule/theorem that states that if Hypotenuse exceeds more than sum of the squares of the sides that the angle > 90. In fact that triangle is no more a right triangle but I just based this on my intuition. I just took an example of triangle with sides 3,4 and 5. )
St2 :
As AC is the Hypo which should be the longest side of a triangle. But here, the Hypotenuse is equal to a side hence this triangle is not a right triangle but we don't know if angle ABC > 90.. Hence In-suff.
I find myself inclined to agree with your logic about statement 1.
However, I find statement 2 to be sufficient by itself as well.
If ac=ab, then angle ABC = angle ACB.
Therefore angle ABC cannot be greater than 90.
Since AC = AB + BC - .001, what if BC = 0.001? then AC = AB again as in statement 2.
when dealing with triangles, i usually look for defined size and shape.
-.001 is a concrete size. however, we dont know whether that is a material size that can change the size of the sides of a triangle. From 1, we cannot infer anything.
CEO
Joined: 17 Nov 2007
Posts: 3412
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
### Show Tags
17 Nov 2007, 23:47
1
2
1. AC=AB+BC-0.001
this is the same as AC>AB+BC (common for triangles)
for example,
AC=1000.001, AB=500, BC=500 => ABC~180
AC=0.001, AB=500, BC=500.001 =>ABC~0
insuf.
2. AB=AC
ABC=ACB => 2ABC<180> ABC<90
suf.
B is correct
P.S if one can draw it solution will come easy.
CEO
Joined: 17 Nov 2007
Posts: 3412
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
### Show Tags
17 Nov 2007, 23:55
walker wrote:
AC=0.001, AB=500, BC=500.001 =>ABC~0
Manager
Joined: 09 Apr 2013
Posts: 192
Location: United States
Concentration: Finance, Economics
GMAT 1: 710 Q44 V44
GMAT 2: 740 Q48 V44
GPA: 3.1
WE: Sales (Mutual Funds and Brokerage)
### Show Tags
12 Apr 2013, 17:27
If Angle ABC is > 90, then AC has to be the hypotenuse.
With Point 1:
If AB is 1, and BC is 1, then AC would be 1.999, making it the hypotenuse
But if AB is .0006, and BC is .0007, then AC would be .0003, making it not the hypotenuse.
Because the .001 gives us no reference, we cannot conclude anything from Point 1 alone.
If AB = AC, then that means that there is no possible way that AC could be the hypotenuse since there is another side of equal length right next to it. Even if BC is infinitely small, it is still >0 and therefore ABC cannot be >90. Therefore, Point 2 is enough for us to disqualify it alone.
Manager
Joined: 10 Mar 2014
Posts: 186
### Show Tags
19 Apr 2014, 12:21
HI Bunnel,
Thanks.
Intern
Joined: 13 Dec 2013
Posts: 8
Re: If points A, B, and C form a triangle... [#permalink]
### Show Tags
24 May 2014, 20:41
1
bekerman wrote:
If points A, B, and C form a triangle, is angle ABC>90 degrees?
(1) AC=AB+BC−0.001
(2) AC=AB
M15-24 in GMATClub tests - I am wondering whether the OA is incorrect?
Statement-1:
AC = AB+ BC - .001,
If AB, BC are quite big numbers (greater than .01), then angle ABC would be greater than 90 degrees. But if length of AB, BC are in the same range of .001, then angle ABC could be acute angle also.
So statement 1 is not sufficient.
Statement -2:
AC= AB, it means angle ABC = angle ACB, now in any triangle sum all the angles is 180 degree, thus ABC +ACB+BAC = 180 degree. Now as ABC = ACB -> 2ABC + BAC = 180 -> ABC = 90 - BAC/2. Hence angle ABC is always less than 90 degree.
Statement 2 is sufficient
Director
Joined: 19 Apr 2013
Posts: 567
Concentration: Strategy, Healthcare
Schools: Sloan '18 (A)
GMAT 1: 730 Q48 V41
GPA: 4
Re: If points A, B, and C form a triangle, is angle ABC>90 degre [#permalink]
### Show Tags
05 Mar 2015, 10:53
Bunuel, can we also claim that when the angle us obtuse c will be greater than a and b?
_________________
If my post was helpful, press Kudos. If not, then just press Kudos !!!
Math Expert
Joined: 02 Sep 2009
Posts: 54543
Re: If points A, B, and C form a triangle, is angle ABC>90 degre [#permalink]
### Show Tags
05 Mar 2015, 11:28
Ergenekon wrote:
Bunuel, can we also claim that when the angle us obtuse c will be greater than a and b?
Yes, the greatest side is opposite the greatest angle.
_________________
Manager
Joined: 13 Dec 2013
Posts: 151
Location: United States (NY)
Schools: Cambridge"19 (A)
GMAT 1: 710 Q46 V41
GMAT 2: 720 Q48 V40
GPA: 4
WE: Consulting (Consulting)
Re: If points A, B, and C form a triangle, is angle ABC>90 degre [#permalink]
### Show Tags
10 May 2017, 12:28
Bunuel wrote:
PathFinder007 wrote:
HI Bunnel,
Thanks.
THEORY:
Say the lengths of the sides of a triangle are a, b, and c, where the largest side is c.
For a right triangle: $$a^2 +b^2= c^2$$.
For an acute (a triangle that has all angles less than 90°) triangle: $$a^2 +b^2>c^2$$.
For an obtuse (a triangle that has an angle greater than 90°) triangle: $$a^2 +b^2<c^2$$.
Points A, B and C form a triangle. Is ABC > 90 degrees?
(1) AC = AB + BC - 0.001.
If AC=0.001, AB=0.001 and BC=0.001, then the triangle will be equilateral, thus each of its angles will be 60 degrees.
If AC=10, AB=5 and BC=5.001, then AC^2>AB^2+BC^2, which means that angle ABC will be more than 90 degrees.
Not sufficient.
(2) AC = AB --> triangle ABC is an isosceles triangle --> angles B and C are equal, which means that angle B cannot be greater than 90 degrees. Sufficient.
Similar questions to practice:
http://gmatclub.com/forum/are-all-angle ... 29298.html
http://gmatclub.com/forum/if-10-12-and- ... 90462.html
Hope it's clear.
Hi Bunuel, to find an obtuse angle within the constraints set by 1) I did the following (in bold). Is this approach okay?
1) If AB + BC = 100, then angle ABC will be close to 180. This triangle is allowed because AC<AB+AC. I felt that this triangle allowed easier visualization of the obtuse angle.
And, as you stated if all sides = 0.001, then angle ABC will be 60.
2) Means that the triangle is isosceles and therefore has 2 equal angles.
2x+y=180
2x=180-y
Because y cannot be 0, x must be less than 90. Suff.
Senior Manager
Status: love the club...
Joined: 24 Mar 2015
Posts: 276
Re: If points A, B, and C form a triangle, is angle ABC>90 degre [#permalink]
### Show Tags
07 Feb 2018, 01:38
bmwhype2 wrote:
If points A, B, and C form a triangle, is angle ABC>90 degrees?
(1) AC = AB + BC − 0.001
(2) AC = AB
M15-24
hi
I don't know whether this is okay, but I tried this problem this way
statement 1 actually says that
AC < AB + BC
now, if we suppose that AC is the largest side, then it is true for any triangle, regardless of whether the triangle is acute or obtuse
so clearly insufficient
statement 2 says that the triangle is an isosceles triangle with its 2 angels equal
now, since the sum of all the angles of a triangle adds up to 180 degrees, angle B cannot even equal to 90 degrees let alone be more than 90 degrees
hence the statement 2 is clearly sufficient
thanks
Manager
Joined: 23 Nov 2016
Posts: 113
Re: If points A, B, and C form a triangle, is angle ABC>90 degre [#permalink]
### Show Tags
27 Mar 2019, 19:10
Experts can someone please explain, Seems like S1 is sufficient. His explanation seems logical .
Amit05 wrote:
bmwhype2 wrote:
Points A, B and C form a triangle. Is ABC > 90 degrees?
1. AC = AB + BC - .001
2. AC = AB
I think the answer is A.
S1 :
AC = AB + BC - .001
AC + .001 = AB + BC
Squaring B.S,
(AC + .001) ^ 2 = (AB + BC )^2
AC^2 + 2AC*.001 = AB^2 + 2AB.BC + BC^2
AC^2 = AB^2 + BC^2 + 2AB.BC + 2AC*.001
By Pythagoras Thm, Angle ABC is 90 if AC^2 = AB^2 + BC^2.
Here AC is bigger than that.. which implies that Angle ABC is > 90. Hence Suff.
PS : (I do not know of any formal rule/theorem that states that if Hypotenuse exceeds more than sum of the squares of the sides that the angle > 90. In fact that triangle is no more a right triangle but I just based this on my intuition. I just took an example of triangle with sides 3,4 and 5. )
St2 :
As AC is the Hypo which should be the longest side of a triangle. But here, the Hypotenuse is equal to a side hence this triangle is not a right triangle but we don't know if angle ABC > 90.. Hence In-suff.
_________________
If my post anyway helped you,please spare Kudos !
Re: If points A, B, and C form a triangle, is angle ABC>90 degre [#permalink] 27 Mar 2019, 19:10
Display posts from previous: Sort by
|
2019-04-25 14:42:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8565189242362976, "perplexity": 2299.044062700229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721468.57/warc/CC-MAIN-20190425134058-20190425155236-00041.warc.gz"}
|
http://regularize.wordpress.com/tag/sparsity-2/
|
With this post I delve into a topic which is somehow new to me, although I planned to look deeper into this for quite some time already. I stumbled upon the paper Gromov-Wasserstein distances and the metric approach to object matching by Facundo Mémoli which was a pleasure to read and motivated this post.
1. Comparing measures with norms and metrics
There are different notions in mathematics to compare two objects, think of the size of real numbers, the cardinality of sets or the length of the difference of two vectors. Here we will deal with not only comparison of objects but with “measures of similarity”. Two fundamental notions for this are norms in vector spaces and metrics. The norm is the stronger concept in that it uses more structure than a metric and also, every norm induces a metric but not the other way round. There are occasions in which both a norm and a metric are available but lead to different concepts of similarity. One of these instances occurs in sparse recovery, especially in the continuous formulation, e.g. as described in a previous post. Consider the unit interval ${I = [0,1]}$ and two Radon measures ${\mu_1}$ and ${\mu_2}$ on ${I}$ (${I}$ could also be an aritrary metric space). On the space of Radon measures ${\mathfrak{M}(I)}$ there is the variation norm
$\displaystyle \|\mu\|_{\mathfrak{M}}= \sup_\Pi\sum_{A\in\Pi}|\mu(A)|$
where the supremum is taken over all partitions ${\Pi}$ of ${I}$ into a finite number of measurable sets. Moreover, there are different metrics one can put on the space of Radon measures, e.g. the Prokhorov metric which is defined for two probability measures (e.g. non-negative ones with unit total mass)
$\displaystyle \begin{array}{rcl} d_P(\mu_1,\mu_2) & = & \inf\{\epsilon>0\ :\ \mu_1(A)\leq \mu_2(A^\epsilon) + \epsilon,\nonumber\\ & & \qquad \mu_2(A)\leq \mu_1(A^\epsilon) + \epsilon\ \text{for all measurable}\ A\} \end{array}$
where ${A^\epsilon}$ denotes the ${\epsilon}$-neighborhood of ${A}$. Another familiy of metrics are the Wasserstein metrics: For ${p\geq 1}$ define
$\displaystyle d_{W,p}(\mu_1,\mu_2) = \Big(\inf_\nu\int_{I\times I} |x-y|^p d\nu(x,y)\Big)^{1/p} \ \ \ \ \ (1)$
where the infimum is taken over all measure couplings of ${\mu_1}$ and ${\mu_2}$, that is, all measures ${\nu}$ on ${I\times I}$ such that for measurable ${A}$ it holds that
$\displaystyle \nu(A\times I) = \mu_1(A)\ \text{and}\ \nu(I\times A) = \mu_2(A).$
Example 1 We compare two Dirac measures ${\mu_1 = \delta_{x_1}}$ and ${\mu_2 = \delta_{x_2}}$ located at distinct points ${x_1\neq x_2}$ in ${I}$ as seen here:
The variation norm measures their distance as
$\displaystyle \|\mu_1-\mu_2\|_{\mathfrak{M}} = \sup_\Pi\sum_{A\in\Pi}|\delta_{x_1}(A) - \delta_{x_2}(A)| = 2$
(choose ${\Pi}$ such that it contains ${A_1}$ and ${A_2}$ small enough that ${x_1\in A_1}$, ${x_2\in A_2}$ but ${x_1\notin A_2}$ and ${x_2\notin A_1}$). The calculate the Prokhorov metric note that you only need to consider ${A}$‘s which contain only one of the points ${x_{1/2}}$ and hence, it evaluates to
$\displaystyle d_P(\mu_1,\mu_2) = |x_1-x_2|.$
For the Wasserstein metric we observe that there is only one possible measure coupling of ${\delta_{x_1}}$ and ${\delta_{x_2}}$, namely the measure ${\nu = \delta_{(x_1,x_2)}}$. Hence, we have
$\displaystyle d_{W,p}(\mu_1,\mu_2) = \Big(\int_{I\times I}|x-y|^pd\delta_{(x_1,x_2)}(x,y)\Big)^{1/p} = |x_1-x_2|.$
The variation norm distinguishes the two Diracs but is not able to grasp the distance of their supports. On the other hand, both metrics return the geometric distance of the supports in the underlying space ${I}$ as distance of the Diracs. Put in pictures: The variation norm of the difference measures the size ob this object
while both metrics capture the distance of the measures like here
It should not stay unnoted that convergence in both the Prokhorov metric and the Wasserstein metrics is exactly the weak convergence of probability measures.
The above example provides a motivation to study metric structures on spaces, even if they are also equipped with a norm. Another reason to shift attention from normed spaces to metric spaces is the fact that there has emerged a body of work to build a theory of analysis in metric spaces (see, e.g. this answer on mathoverflow or the book Gradient Flows: In Metric Spaces And In The Space Of Probability Measures by Ambrosio, Gigli and Savaré (which puts special emphasis on the space of probability measures)). Yet another motivation for the study of metrics in this way is the problem of comparing shapes (without being precisely defined yet): Which of these shapes look most alike?
(Note that shapes need not to be two dimensional figures, you may also think of more complex objects like surfaces in three dimensions or Riemannian manifolds.)
One may also ask the question how two compare different images defined on different shapes, i.e. different “distributions of colour” on two different shapes.
2. Comparing shapes: Metric spaces
Up to now we tried to compare different measures, defined on the same set. At least to me it seems that both the Prokhorov and the Wasserstein metrics are suited to measure the similarity of measures and in fact, they do so somehow finer than the usual norm does.
Let’s try to go one step further and ask ourselves, how we could compare two measures ${\mu_1}$ and ${\mu_2}$ which are defined on two different sets? While thinking about an answer one need to balance several things:
• The setup should be general enough to allow for the comparison of a wide range of objects.
• It should include enough structure to allow meaningful statements.
• It should lead to a measure which is easy enough to handle both analytically and computationally.
For the first and second bullet: We are going to work with measures not on arbitrary sets but on metric spaces. This will allow to measure distances between points in the sets and, as you probably know, does not pose a severe restriction. Although metric spaces are much more specific than topological spaces, we still aim at quantitative measures which are not provided by topologies. With respect to the last bullet: Note that both the Prokhorov and the Wasserstein metric are defined as infimums over fairly large and not too well structured sets (for the Prokhorov metric and need to consider all measurable sets and their ${\epsilon}$-neighborhoods, for the Wasserstein metric, one need to consider all measure couplings). While they can be handled quite well theoretically, their computational realization can be cumbersome.
In a similar spirit than Facundo Memoli’s paper we work our way up from comparing subsets of metric spaces up to comparing two different metric spaces with two measures defined on them.
2.1. Comparing compact subsets of a metric space: Hausdorff
Let ${(X,d)}$ be a compact metric space. Almost hundred years ago Hausdorff introduced a metric on the family of all non-empty compact subsets of a metric space as follows: The Hausdorff metric of two compact subsets ${A}$ and ${B}$ of ${X}$ is defined as
$\displaystyle d_H(A,B) = \inf\{\epsilon>0 \ :\ A\subset B_\epsilon,\ B \subset A_\epsilon\}$
(again, using the notion of ${\epsilon}$-neighborhood). This definition seems to be much in the spirit of the Prokhorov metric.
Proposition 2.1 in Facundo Memolis paper shows that the Hausdorff metric has an equivalent description as
$\displaystyle d_H(A,B) = \inf_R \sup_{(a,b) \in R} d(a,b)$
where the infimum is taken over all correspondences ${R}$ of ${A}$ and ${B}$, i.e., all subset ${R\subset A\times B}$ such that for all ${a\in A}$ there is ${b\in B}$ such that ${(a,b) \in R}$ and for all ${b\in B}$ there ${a\in A}$ such that ${(a,b)\in R}$. One may also say set coupling of ${A}$ and ${B}$ instead of correspondence.
Example 2 There is always the full coupling ${R = A\times B}$. Three different set couplings of two subsets ${A}$ and ${B}$ of the unit interval are shown here:
the “full one” ${A\times B}$ in green and two “slim” ones in red and orange. Other “slim” couplings can be obtained from surjective mappings ${f:A\rightarrow B}$ by ${R = \{(a,f(a))\ :\ a\in A\}}$ (or with the roles of ${A}$ and ${B}$ swapped): If you couple a set ${A}$ with itself, there is also the trivial coupling
$\displaystyle R = \{(a,a)\ : \ a\in A\}$
which is just the diagonal of ${A\times A}$
Note that the alternative definition of the Hausdorff metric is more in the spirit of the Wasserstein metric: It does not use enlarged objects (by ${\epsilon}$-neighborhoods) but couplings.
The Hausdorff metric is indeed a metric on the set ${\mathfrak{C}(X)}$ of all non-empty compact subsets of a metric space ${X}$ and if ${X}$ itself is compact it even holds that ${(\mathfrak{C}(X),d_H)}$ is a compact metric space (a result, known as Blaschke Selection Theorem).
One may say that we went up an abstraction ladder one step by moving from ${(X,d)}$ to ${(\mathfrak{C}(X),d_H)}$.
2.2. Comparing compact metric spaces: Gromov-Hausdorff
In the previous subsection we worked within one metric space ${X}$. In the book “Metric Structures for Riemannian and Non-Riemannian Spaces” Misha Gromov introduced a notion to compare two different metric spaces. For compact metric space ${X}$ and ${Y}$ the Gromov-Hausdorff metric is defined as
$\displaystyle d_{GH}(X,Y) = \inf_{Z,f,g} d_H(f(X),g(Y)) \ \ \ \ \ (2)$
where the infimum is taken over
• all metric spaces ${Z}$ and
• all isometric embeddings ${f}$ and ${g}$ which embed ${X}$ and ${Y}$ into ${Z}$ respectively.
In words: To compute the Gromov-Hausdorff metric, you try embed both ${X}$ and ${Y}$ into a common larger space isometrically such that they are as close as possible according to the Hausdorff metric in that space.
Strictly speaking, the above definition is not well stated as one can not form an infimum over all metric spaces since this collection does not form a set according to the rules of set theory. More precisely one should write that ${d_{GH}(X,Y)}$ is the infimum over all ${r>0}$ such that there exists a metric space ${Z}$ and isometric embeddings ${f}$ and ${g}$ of ${X}$ and ${Y}$, respectively, such that ${d_H(f(X),g(Y)).
As the Hausdorff metric could be reformulated with set couplings there is a reformulation of the Gromov-Hausdorff metric based on metric couplings: A metric coupling of two metric spaces ${(X,d_X)}$ and ${(Y,d_Y)}$ is a metric ${d}$ on the disjoint union ${X\sqcup Y}$ of ${X}$ and ${Y}$ such that for all ${x,x'\in X}$ and ${y,y'\in Y}$ it holds that ${d(x,x') = d_X(x,x')}$ and ${d(y,y') = d_Y(y,y')}$.
Example 3 We couple a metric space ${(X,d)}$ with itself. We denote with ${(X',d')}$ an identical copy of ${(X,d)}$ and look for a metric ${D}$ on ${X\times X'}$ that respects the metrics ${d}$ and ${d'}$ in the way a metric coupling has to.
To distinguish elements from ${X}$ and ${X'}$ we put a ${'}$ on all quantities from ${X'}$. Moreover, for ${x\in X}$ we denote by ${x'}$ its identical copy in ${X'}$ (and similarly for ${x'\in X'}$, ${x}$ is its identical twin). Then, for any ${\epsilon>0}$ we can define ${D_\epsilon(x,x') = D_\epsilon(x',x) = \epsilon}$ (i.e. the distance between any two identical twins is ${\epsilon}$. By the triangle inequality we get for ${x\in X}$ and ${y'\in X'}$ that ${D_\epsilon(x,y')}$ should fulfill
$\displaystyle D_\epsilon(x',y') - D_\epsilon(x',x) \leq D_\epsilon(x,y') \leq D_\epsilon(x,y) + D_\epsilon(y,y')$
and hence
$\displaystyle d(x,y) - \epsilon \leq D_\epsilon(x,y') \leq d(x,y) + \epsilon.$
Indeed we can choose ${D_\epsilon(x,y') = d(x,y)}$ if ${x\in X}$ and ${y'\in Y}$ leading to one specific metric coupling for any ${\epsilon}$. This couplings allow to distinguish identical twins and behave as a metric on the whole disjoint union. In the limiting case ${\epsilon\rightarrow 0}$ we do not obtain a metric but a semi-metric or pseudo-metric which is just the same as a metric but without the assumption that ${d(x,y) = 0}$ implies that ${x=y}$.
Example 4 The above example of a metric coupling of a metric space with itself was somehow “reproducing” the given metric as accurate as possible. There are also other couplings that put very different distances to points ${D(x,y')}$ and there is also a way to visualize metric couplings: When building the disjoint union of two metric spaces ${X}$ and ${Y}$, you can imagine this as isometrically embedding both in a larger metric space ${Z}$ in a non-overlapping way and obtain the metric coupling ${D}$ as the restriction of the metric on ${Z}$ to ${X\sqcup Y}$. For ${X=Y=[0,1]}$ you can embed both into ${Z = {\mathbb R}^2}$. A metric coupling which is similar (but not equal) to the coupling of the previous example is obtained by putting ${X}$ and ${Y}$ side by side at distance ${\epsilon}$ as here (one space in green, the other in blue).
A quite different coupling is obtained by putting ${X}$ and ${Y}$ side by side, but in a reversed way as here:
You may even embed them in a more weired way as here:
but remember that the embeddings has to be isometric, hence, distortions like here are not allowed.
This example illustrate that the idea of metric coupling is in similar spirit as of “embedding two spaces in a common larger one”.
With the notion of metric coupling, the Gromov-Hausdorff metric can be written as
$\displaystyle d_{GH}(X,Y) = \inf_{R,d} \sup_{(x,y)\in R} d(x,y) \ \ \ \ \ (3)$
where the infimum is taken over all set couplings ${R}$ of ${X}$ and ${Y}$ and all metric couplings ${d}$ of ${(X,d_X)}$ and ${(Y,d_Y)}$.
In words: To compute the Gromov-Hausdorff metric this way, you look for a set coupling of the base sets ${X}$ and ${Y}$ and a metric coupling ${d}$ of the metrics ${d_X}$ and ${d_Y}$ such that the maximal distance of two coupled points ${x}$ and ${y}$ is as small as possible. While this may look more complicated than the original definition from~(2), note that the original definition uses all metric spaces ${Z}$ in which you can embed ${X}$ and ${Y}$ isometrically, which seems barely impossible to realize. Granted, the new definition also considers a lot of quantities.
Also note that this definition is in spirit of the Wasserstein metric from~(1): If there were natural measures ${\mu_R}$ on the set couplings ${R}$ we could write \begin{equation*} d_{GH}(X,Y) = \inf_{R,d} \Big(\int d(x,y)^pd\mu_R\Big)^{1/p} \end{equation*} and in the limit ${p\rightarrow\infty}$ we would recover definition~(3).
Example 5 The Gromov-Hausdorff distance of a metric space ${(X,d_X)}$ to itself is easily seen to be zero: Consider the trivial coupling ${R = \{(x,x)\ :\ x\in X\}}$ from Example~2 and the family ${D_\epsilon}$ of metric couplings from Example~3. Then we have ${d_{GH}(X,X) \leq \epsilon}$ for any ${\epsilon >0}$ showing ${d_{GH}(X,X) = 0}$. Let’s take one of the next-complicated examples and compute the distance of ${X = [0,1]}$ and ${Y=[0,2]}$, both equipped with the euclidean metric. We couple the sets ${X}$ and ${Y}$ by ${R = \{(x,2x)\ : \ x\in X\}}$ and the respective metrics by embedding ${X}$ and ${Y}$ into ${{\mathbb R}^2}$ as follows: Put ${Y}$ at the line from ${(0,0)}$ to ${(2,0)}$ and ${X}$ at the line from ${(\tfrac12,\epsilon)}$ to ${(1\tfrac12,\epsilon)}$:
This shows that ${d_{GH}(X,Y) \leq \tfrac12}$ and actually, we have equality here.
There is another reformulation of the Gromov-Hausdorff metric, the equivalence of which is shown in Theorem 7.3.25 in the book “A Course in Metric Geometry” by Dmitri Burago, Yuri Burago and Sergei Ivanov:
$\displaystyle d_{GH}(X,Y) = \tfrac12\inf_R \sup_{\overset{\overset{x_{1/2}\in X}{y_{1/2}\in Y}}{(x_i,y_i)\in R}}\big| d_X(x_1,x_2) - d_Y(y_1,y_2)\big| \ \ \ \ \ (4)$
where the infimum is taken over all set couplings ${R}$ of ${X}$ and ${Y}$.
In words: Look for a set coupling such that any two coupled pairs ${(x_1,y_1)}$ and ${(x_2,y_2)}$ have the “most equal” distance.
This reformulation may have the advantage over the form (3) in that is only considers the set couplings and the given metrics ${d_X}$ and ${d_Y}$ and no metric coupling is needed.
Note that, as the previous reformulation~(3), it is also in the spirit of the Wasserstein metric: If there were natural measures ${\mu_R}$ in the set couplings ${R}$, we could write
$\displaystyle d_{GH}(X,Y) = \tfrac12\inf_R \Big(\int_{R\times R}\big| d_X(x_1,x_2) - d_Y(y_1,y_2)\big|^p d\mu_R(x_1,y_1)d\mu_R(x_2,y_2)\Big)^{1/p}.$
and recover the formulation~(4) in the limit ${p\rightarrow\infty}$.
One may say that we went up an abstraction ladder one step further by moving from ${(X,d)}$ to ${(\mathfrak{C}(X),d_H)}$ to ${(\text{All compact metric spaces},d_{GH})}$.
Since this post has been grown pretty long already, I decided to do the next step (which is the already announced metric on metric spaces which additionally carry some measure on them – so-called metric measure spaces) in a later post.
Let ${\Omega}$ be a compact subset of ${{\mathbb R}^d}$ and consider the space ${C(\Omega)}$ of continuous functions ${f:\Omega\rightarrow {\mathbb R}}$ with the usual supremum norm. The Riesz Representation Theorem states that the dual space of ${C(\Omega)}$ is in this case the set of all Radon measures, denoted by ${\mathfrak{M}(\Omega)}$ and the canonical duality pairing is given by
$\displaystyle \langle\mu,f\rangle = \mu(f) = \int_\Omega fd\mu.$
We can equip ${\mathfrak{M}(\Omega)}$ with the usual notion of weak* convergence which read as
$\displaystyle \mu_n\rightharpoonup^* \mu\ \iff\ \text{for every}\ f:\ \mu_n(f)\rightarrow\mu(f).$
We call a measure ${\mu}$ positive if ${f\geq 0}$ implies that ${\mu(f)\geq 0}$. If a positive measure satisfies ${\mu(1)=1}$ (i.e. it integrates the constant function with unit value to one), we call it a probability measure and we denote with ${\Delta\subset \mathfrak{M}(\Omega)}$ the set of all probability measures.
Example 1 Every non-negative integrable function ${\phi:\Omega\rightarrow{\mathbb R}}$ with ${\int_\Omega \phi(x)dx}$ induces a probability measure via
$\displaystyle f\mapsto \int_\Omega f(x)\phi(x)dx.$
Quite different probability measures are the ${\delta}$-measures: For every ${x\in\Omega}$ there is the ${\delta}$-measure at this point, defined by
$\displaystyle \delta_x(f) = f(x).$
In some sense, the set ${\Delta}$ of probability measure is the generalization of the standard simplex in ${{\mathbb R}^n}$ to infinite dimensions (in fact uncountably many dimensions): The ${\delta}$-measures are the extreme points of ${\Delta}$ and since the set ${\Delta}$ is compact in the weak* topology, the Krein-Milman Theorem states that ${\Delta}$ is the weak*-closure of the set of convex combinations of the ${\delta}$-measures – similarly as the standard simplex in ${{\mathbb R}^n}$ is the convex combination of the canonical basis vectors of ${{\mathbb R}^n}$.
Remark 1 If we drop the positivity assumption and form the set
$\displaystyle O = \{\mu\in\mathfrak{M}(\Omega)\ :\ |f|\leq 1\implies |\mu(f)|\leq 1\}$
we have the ${O}$ is the set of convex combinations of the measures ${\pm\delta_x}$ (${x\in\Omega}$). Hence, ${O}$ resembles the hyper-octahedron (aka cross polytope or ${\ell^1}$-ball).
I’ve taken the above (with almost similar notation) from the book “ A Course in Convexity” by Alexander Barvinok. I was curious to find (in Chapter III, Section 9) something which reads as a nice glimpse on semi-continuous compressed sensing: Proposition 9.4 reads as follows
Proposition 1 Let ${g,f_1,\dots,f_m\in C(\Omega)}$, ${b\in{\mathbb R}^m}$ and suppose that the subset ${B}$ of ${\Delta}$ consisting of the probability measures ${\mu}$ such that for ${i=1,\dots,m}$
$\displaystyle \int f_id\mu = b_i$
is not empty. Then there exists ${\mu^+,\mu^-\in B}$ such that
1. ${\mu^+}$ and ${\mu^-}$ are convex combinations of at most ${m+1}$ ${\delta}$-measures, and
2. it holds that for all ${\mu\in B}$ we have
$\displaystyle \mu^-(g)\leq \mu(g)\leq \mu^+(g).$
In terms of compressed sensing this says: Among all probability measures which comply with the data ${b}$ measured by ${m}$ linear measurements, there are two extremal ones which consists of ${m+1}$ ${\delta}$-measures.
Note that something similar to “support-pursuit” does not work here: The minimization problem ${\min_{\mu\in B, \mu(f_i)=b_i}\|\mu\|_{\mathfrak{M}}}$ does not make much sense, since ${\|\mu\|_{\mathfrak{M}}=1}$ for all ${\mu\in B}$.
ISMP is over now and I’m already home. I do not have many things to report on from the last day. This is not due the lower quality of the talks but due to the fact that I was a little bit exhausted, as usual at the end of a five-day conference. However, I collect a few things for the record:
• In the morning I visited the semi-planary by Xiaojun Chenon non-convex and non-smooth minimization with smoothing methods. Not surprisingly, she treated the problem
$\displaystyle \min_x f(x) + \|x\|_p^p$
with convex and smooth ${f:{\mathbb R}^n\rightarrow{\mathbb R}}$ and ${0. She proposed and analyzed smoothing methods, that is, to smooth the problem a bit to obtain a Lipschitz-continuous objective function ${\phi_\epsilon}$, minimizing this and then gradually decreasing ${\epsilon}$. This works, as she showed. If I remember correctly, she also treated “iteratively reweighted least squares” as I described in my previous post. Unfortunately, she did not include the generalized forward-backward methods based on ${\text{prox}}$-functions for non-convex functions. Kristian and I pursued this approach in our paper Minimization of non-smooth, non-convex functionals by iterative thresholding and some special features of our analysis include:
• A condition which excludes some (but not all) local minimizers from being global.
• An algorithm which avoids this non-global minimizers by carefully adjusting the steplength of the method.
• A result that the number of local minimizers is still finite, even if the problem is posed in ${\ell^2({\mathbb N})}$ and not in ${{\mathbb R}^n}$.
Most of our results hold true, if the ${p}$-quasi-norm is replaced by functions of the form
$\displaystyle \sum_n \phi_n(|x_n|)$
with special non-convex ${\phi}$, namely fulfilling a list of assumptions like
• ${\phi'(x) \rightarrow \infty}$ for ${x\rightarrow 0}$ (infinite slope at ${0}$) and ${\phi(x)\rightarrow\infty}$ for ${x\rightarrow\infty}$ (mild coercivity),
• ${\phi'}$ strictly convex on ${]0,\infty[}$ and ${\phi'(x)/x\rightarrow 0}$ for ${x\rightarrow\infty}$,
• for each ${b>0}$ there is ${a>0}$ such that for ${x it holds that ${\phi(x)>ax^2}$, and
• local integrability of some section of ${\partial\phi'(x) x}$.
As one easily sees, ${p}$-quasi-norms fulfill the assumptions and some other interesting functions as well (e.g. some with very steep slope at ${0}$ like ${x\mapsto \log(x^{1/3}+1)}$).
• Jorge Nocedalgave a talk on second-order methods for non-smooth problems and his main example was a functional like
$\displaystyle \min_x f(x) + \|x\|_1$
with a convex and smooth ${f}$, but different from Xiaojun Chen, he only considered the ${1}$-norm. His talked is among the best planary talks I have ever attended and it was a great pleasure to listen to him. He carefully explained things and put them in perspective. In the case he skipped slides, he made me feel that I either did not miss an important thing, or understood them even though he didn’t show them He argued that it is not necessarily more expensive to use second order information in contrast to first order methods. Indeed, the ${1}$-norm can be used to reduce the number of degrees of freedom for a second order step. What was pretty interesting is, that he advocated semismooth Newton methods for this problem. Roland and I pursued this approach some time ago in our paper A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints and, if I remember correctly (my notes are not complete at this point), his family of methods included our ssn-method. The method Roland and I proposed worked amazingly well in the cases in which it converged but the method suffered from non-global convergence. We had some preliminary ideas for globalization, which we could not tune enough to retain the speed of the method, and abandoned the topic. Now, that the topic will most probably be revived by the community, I am looking forward to fresh ideas here.
Today I report on two things I came across here at ISMP:
• The first is a talk by Russell Luke on Constraint qualifications for nonconvex feasibility problems. Luke treated the NP-hard problem of sparsest solutions of linear systems. In fact he did not tackle this problem but the problem to find an ${s}$-sparse solution of an ${m\times n}$ system of equations. He formulated this as a feasibility-problem (well, Heinz Bauschke was a collaborator) as follows: With the usual malpractice let us denote by ${\|x\|_0}$ the number of non-zero entries of ${x\in{\mathbb R}^n}$. Then the problem of finding an ${s}$-sparse solution to ${Ax=b}$ is:
$\displaystyle \text{Find}\ x\ \text{in}\ \{\|x\|_0\leq s\}\cap\{Ax=b\}.$
In other words: find a feasible point, i.e. a point which lies in the intersection of the two sets. Well, most often feasibility problems involve convex sets but here, the first one given by this “${0}$-norm” is definitely not convex. One of the simplest algorithms for the convex feasibility problem is to alternatingly project onto both sets. This algorithm dates back to von Neumann and has been analyzed in great detail. To make this method work for non-convex sets one only needs to know how to project onto both sets. For the case of the equality constraint ${Ax=b}$ one can use numerical linear algebra to obtain the projection. The non-convex constraint on the number of non-zero entries is in fact even easier: For ${x\in{\mathbb R}^n}$ the projection onto ${\{\|x\|_0\leq s\}}$ consists of just keeping the ${s}$ largest entries of ${x}$ while setting the others to zero (known as the “best ${s}$-term approximation”). However, the theory breaks down in the case of non-convex sets. Russell treated problem in several papers (have a look at his publication page) and in the talk he focused on the problem of constraint qualification, i.e. what kind of regularity has to be imposed on the intersection of the two sets. He could shows that (local) linear convergence of the algorithm (which is observed numerically) can indeed be justified theoretically. One point which is still open is the phenomenon that the method seems to be convergent regardless of the initialization and that (even more surprisingly) that the limit point seems to be independent of the starting point (and also seems to be robust with respect to overestimating the sparsity ${s}$). I wondered if his results are robust with respect to inexact projections. For larger problems the projection onto the equality constraint ${Ax=b}$ are computationally expensive. For example it would be interesting to see what happens if one approximates the projection with a truncated CG-iteration as Andreas, Marc and I did in our paper on subgradient methods for Basis Pursuit.
• Joel Tropp reported on his paper Sharp recovery bounds for convex deconvolution, with applications together with Michael McCoy. However, in his title he used demixing instead of deconvolution (which, I think, is more appropriate and leads to less confusion). With “demixing” they mean the following: Suppose you have two signals ${x_0}$ and ${y_0}$ of which you observe only the superposition of ${x_0}$ and a unitarily transformed ${y_0}$, i.e. for a unitary matrix ${U}$ you observe
$\displaystyle z_0 = x_0 + Uy_0.$
Of course, without further assumptions there is no way to recover ${x_0}$ and ${y_0}$ from the knowledge of ${z_0}$ and ${U}$. As one motivation he used the assumption that both ${x_0}$ and ${y_0}$ are sparse. After the big bang of compressed sensing nobody wonders that one turns to convex optimization with ${\ell^1}$-norms in the following manner:
$\displaystyle \min_{x,y} \|x\|_1 + \lambda\|y\|_1 \ \text{such that}\ x + Uy = z_0. \ \ \ \ \ (1)$
This looks a lot like sparse approximation: Eliminating ${x}$ one obtains the unconstraint problem \begin{equation*} \min_y \|z_0-Uy\|_1 + \lambda \|y\|_1. \end{equation*}
Phrased differently, this problem aims at finding an approximate sparse solution of ${Uy=z_0}$ such that the residual (could also say “noise”) ${z_0-Uy=x}$ is also sparse. This differs from the common Basis Pursuit Denoising (BPDN) by the structure function for the residual (which is the squared ${2}$-norm). This is due to the fact that in BPDN one usually assumes Gaussian noise which naturally lead to the squared ${2}$-norm. Well, one man’s noise is the other man’s signal, as we see here. Tropp and McCoy obtained very sharp thresholds on the sparsity of ${x_0}$ and ${y_0}$ which allow for exact recovery of both of them by solving (1). One thing which makes their analysis simpler is the following reformulation: The treated the related problem \begin{equation*} \min_{x,y} \|x\|_1 \text{such that} \|y\|_1\leq\alpha, x+Uy=z_0 \end{equation*} (which I would call this the Ivanov version of the Tikhonov-problem (1)). This allows for precise exploitation of prior knowledge by assuming that the number ${\alpha_0 = \|y_0\|_1}$ is known.
First I wondered if this reformulation was responsible for their unusual sharp results (sharper the results for exact recovery by BDPN), but I think it’s not. I think this is due to the fact that they have this strong assumption on the “residual”, namely that it is sparse. This can be formulated with the help of ${1}$-norm (which is “non-smooth”) in contrast to the smooth ${2}$-norm which is what one gets as prior for Gaussian noise. Moreover, McCoy and Tropp generalized their result to the case in which the structure of ${x_0}$ and ${y_0}$ is formulated by two functionals ${f}$ and ${g}$, respectively. Assuming a kind of non-smoothness of ${f}$ and ${g}$ the obtain the same kind of results and especially matrix decomposition problems are covered.
The scientific program at ISMP started today and I planned to write a small personal summary of each day. However, it is a very intense meeting. Lot’s of excellent talks, lot’s of people to meet and little spare time. So I’m afraid that I have to deviate from my plan a little bit. Instead of a summary of every day I just pick out a few events. I remark that these picks do not reflect quality, significance or something like this in any way. I just pick things for which I have something to record for personal reasons.
My day started after the first plenary which the session Testing environments for machine learning and compressed sensing in which my own talk was located. The session started with the talk by Michael Friedlander of the SPOT toolbox. Haven’t heard of SPOT yet? Take a look! In a nutshell its a toolbox which turns MATLAB into “OPLAB”, i.e. it allows to treat abstract linear operators like matrices. By the way, the code is on github.
The second talk was by Katya Scheinberg (who is giving a semi-planary talk on derivative free optimization at the moment…). She talked about speeding up FISTA by cleverly adjusting step-sizes and over-relaxation parameters and generalizing these ideas to other methods like alternating direction methods. Notably, she used the “SPEAR test instances” from our project homepage! (And credited them as “surprisingly hard sparsity problems”.)
My own talk was the third and last one in that session. I talked about the issue of constructing test instance for Basis Pursuit Denoising. I argued that the naive approach (which takes a matrix ${A}$, a right hand side ${b}$ and a parameter ${\lambda}$ and let some great solver run for a while to obtain a solution ${x^*}$) may suffer from “trusted method bias”. I proposed to use “reverse instance construction” which is: First choose ${A}$, ${\lambda}$ and the solution ${x^*}$ and the construct the right hand side ${b}$ (I blogged on this before here).
Last but not least, I’d like to mention the talk by Thomas Pock: He talked about parameter selection on variational models (think of the regularization parameter in Tikhonov, for example). In a paper with Karl Kunisch titled A bilevel optimization approach for parameter learning in variational models they formulated this as a bi-level optimization problem. An approach which seemed to have been overdue! Although they treat somehow simple inverse problems (well, denoising) (but with not so easy regularizers) it is a promising first step in this direction.
In this post I just collect a few papers that caught my attention in the last moth.
I begin with Estimating Unknown Sparsity in Compressed Sensing by Miles E. Lopes. The abstract reads
Within the framework of compressed sensing, many theoretical guarantees for signal reconstruction require that the number of linear measurements ${n}$ exceed the sparsity ${\|x\|_0}$ of the unknown signal ${x\in\mathbb{R}^p}$. However, if the sparsity ${\|x\|_0}$ is unknown, the choice of ${n}$ remains problematic. This paper considers the problem of estimating the unknown degree of sparsity of ${x}$ with only a small number of linear measurements. Although we show that estimation of ${\|x\|_0}$ is generally intractable in this framework, we consider an alternative measure of sparsity ${s(x):=\frac{\|x\|_1^2}{\|x\|_2^2}}$, which is a sharp lower bound on ${\|x\|_0}$, and is more amenable to estimation. When ${x}$ is a non-negative vector, we propose a computationally efficient estimator ${\hat{s}(x)}$, and use non-asymptotic methods to bound the relative error of ${\hat{s}(x)}$ in terms of a finite number of measurements. Remarkably, the quality of estimation is dimension-free, which ensures that ${\hat{s}(x)}$ is well-suited to the high-dimensional regime where ${n<. These results also extend naturally to the problem of using linear measurements to estimate the rank of a positive semi-definite matrix, or the sparsity of a non-negative matrix. Finally, we show that if no structural assumption (such as non-negativity) is made on the signal ${x}$, then the quantity ${s(x)}$ cannot generally be estimated when ${n<.
It’s a nice combination of the observation that the quotient ${s(x)}$ is a sharp lower bound for ${\|x\|_0}$ and that it is possible to estimate the one-norm and the two norm of a vector ${x}$ (with additional properties) from carefully chosen measurements. For a non-negative vector ${x}$ you just measure with the constant-one vector which (in a noisy environment) gives you an estimate of ${\|x\|_1}$. Similarly, measuring with Gaussian random vector you can obtain an estimate of ${\|x\|_2}$.
Then there is the dissertation of Dustin Mixon on the arxiv: Sparse Signal Processing with Frame Theory which is well worth reading but too long to provide a short overview. Here is the abstract:
Many emerging applications involve sparse signals, and their processing is a subject of active research. We desire a large class of sensing matrices which allow the user to discern important properties of the measured sparse signal. Of particular interest are matrices with the restricted isometry property (RIP). RIP matrices are known to enable efficient and stable reconstruction of sfficiently sparse signals, but the deterministic construction of such matrices has proven very dfficult. In this thesis, we discuss this matrix design problem in the context of a growing field of study known as frame theory. In the first two chapters, we build large families of equiangular tight frames and full spark frames, and we discuss their relationship to RIP matrices as well as their utility in other aspects of sparse signal processing. In Chapter 3, we pave the road to deterministic RIP matrices, evaluating various techniques to demonstrate RIP, and making interesting connections with graph theory and number theory. We conclude in Chapter 4 with a coherence-based alternative to RIP, which provides near-optimal probabilistic guarantees for various aspects of sparse signal processing while at the same time admitting a whole host of deterministic constructions.
By the way, the thesis is dedicated “To all those who never dedicated a dissertation to themselves.”
Further we have Proximal Newton-type Methods for Minimizing Convex Objective Functions in Composite Form by Jason D Lee, Yuekai Sun, Michael A. Saunders. This paper extends the well explored first order methods for problem of the type ${\min g(x) + h(x)}$ with Lipschitz-differentiable ${g}$ or simple ${\mathrm{prox}_h}$ to second order Newton-type methods. The abstract reads
We consider minimizing convex objective functions in composite form
$\displaystyle \min_{x\in\mathbb{R}^n} f(x) := g(x) + h(x)$
where ${g}$ is convex and twice-continuously differentiable and ${h:\mathbb{R}^n\rightarrow\mathbb{R}}$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. Many problems of relevance in high-dimensional statistics, machine learning, and signal processing can be formulated in composite form. We prove such methods are globally convergent to a minimizer and achieve quadratic rates of convergence in the vicinity of a unique minimizer. We also demonstrate the performance of such methods using problems of relevance in machine learning and high-dimensional statistics.
With this post I say goodbye for a few weeks of holiday.
Today I would like to comment on two arxiv-preprints I stumbled upon:
1. “Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm” – The Elastic Net rediscovered
The paper “Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm” by Ming-Jun Lai and Wotao Yin is another contribution to a field which is (or was?) probably the fastest growing field in applied mathematics: Algorithms for convex problems with non-smooth ${\ell^1}$-like terms. The “mother problem” here is as follows: Consider a matrix ${A\in{\mathbb R}^{m\times n}}$, ${b\in{\mathbb R}^m}$ try to find a solution of
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1\quad\text{s.t.}\quad Ax=b$
or, for ${\sigma>0}$
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1\quad\text{s.t.}\quad \|Ax-b\|\leq\sigma$
which appeared here on this blog previously. Although this is a convex problem and even has a reformulation as linear program, some instances of this problem are notoriously hard to solve and gained a lot of attention (because their applicability in sparse recovery and compressed sensing). Very roughly speaking, a part of its hardness comes from the fact that the problem is neither smooth nor strictly convex.
The contribution of Lai and Yin is that they analyze a slight perturbation of the problem which makes its solution much easier: They add another term in the objective; for ${\alpha>0}$ they consider
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\quad\text{s.t.}\quad Ax=b$
or
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\quad\text{s.t.}\quad \|Ax-b\|\leq\sigma.$
This perturbation does not make the problem smooth but renders it strongly convex (which usually makes the dual more smooth). It turns out that this perturbation makes life with this problem (and related ones) much easier – recovery guarantees still exists and algorithms behave better.
I think it is important to note that the “augmentation” of the ${\ell^1}$ objective with an additional squared ${\ell^2}$-term goes back to Zou and Hastie from the statistics community. There, the motivation was as follows: They observed that the pure ${\ell^1}$ objective tends to “overpromote” sparsity in the sense that if there are two columns in ${A}$ which are almost equally good in explaining some component of ${b}$ then only one of them is used. The “augmented problem”, however, tends to use both of them. They coined the method as “elastic net” (for reasons which I never really got).
I also worked on elastic-net problems for problems in the form
$\displaystyle \min_x \frac{1}{2}\|Ax-b\|^2 + \alpha\|x\|_1 + \frac{\beta}{2}\|x\|_2^2$
in this paper (doi-link). Here it also turns out that the problem gets much easier algorithmically. I found it very convenient to rewrite the elastic-net problem as
$\displaystyle \min_x \frac{1}{2}\|\begin{bmatrix}A\\ \sqrt{\beta} I\end{bmatrix}x-\begin{bmatrix}b\\ 0\end{bmatrix}\|^2 + \alpha\|x\|_1$
which turns the elastic-net problem into just another ${\ell^1}$-penalized problem with a special matrix and right hand side. Quite convenient for analysis and also somehow algorithmically.
2. Towards a Mathematical Theory of Super-Resolution
The second preprint is “Towards a Mathematical Theory of Super-Resolution” by Emmanuel Candes and Carlos Fernandez-Granda.
The idea of super-resolution seems to pretty old and, very roughly speaking, is to extract a higher resolution of a measured quantity (e.g. an image) than the measured data allows. Of course, in this formulation this is impossible. But often one can gain something by additional knowledge of the image. Basically, this also is the idea behind compressed sensing and hence, it does not come as a surprise that the results in compressed sensing are used to try to explain when super-resolution is possible.
The paper by Candes and Fernandez-Granada seems to be pretty close in spirit to Exact Reconstruction using Support Pursuit on which I blogged earlier. They model the sparse signal as a Radon measure, especially as a sum of Diracs. However, different from the support-pursuit-paper they use complex exponentials (in contrast to real polynomials). Their reconstruction method is basically the same as support pursuit: The try to solve
$\displaystyle \min_{x\in\mathcal{M}} \|x\|\quad\text{s.t.}\quad Fx=y, \ \ \ \ \ (1)$
i.e. they minimize over the set of Radon measures ${\mathcal{M}}$ under the constraint that certain measurements ${Fx\in{\mathbb R}^n}$ result in certain given values ${y}$. Moreover, they make a thorough analysis of what is “reconstructable” by their ansatz and obtain a lower bound on the distance of two Diracs (in other words, a lower bound in the Prokhorov distance). I have to admit that I do not share one of their claims from the abstract: “We show that one can super-resolve these point sources with infinite precision—i.e. recover the exact locations and amplitudes—by solving a simple convex program.” My point is that I can not see to what extend the problem (1) is a simple one. Well, it is convex, but it does not seem to be simple.
I want to add that the idea of “continuous sparse modelling” in the space of signed measures is very appealing to me and appeared first in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen.
Today I write about sparse recovery of “multidimensional” signal. With “multidimensional” I mean something like this: A one-dimensional signal is a vector ${x\in{\mathbb R}^n}$ while a two-dimensional signal is a matrix ${x\in{\mathbb R}^{n_1\times n_2}}$. Similarly, ${d}$-dimensional signal is ${x\in{\mathbb R}^{n_1\times\cdots\times n_d}}$. Of course, images as two-dimensional signals, come time mind. Moreover, movies are three-dimensional, a hyperspectral 2D image (which has a whole spectrum attached to any pixel) is also three-dimensional), and time-dependent volume-data is four-dimensional.
Multidimensional data is often a challenge due the large amount of data. While the size of the signals is usually not the problem it is more the size of the measurement matrices. In the context of compressed sensing or sparse recovery the signal is measured with a linear operator, i.e. one applies a number ${m}$ of linear functionals to the signal. In the ${d}$-dimensional case this can be encoded as a matrix ${A\in {\mathbb R}^{m\times \prod_1^d n_i}}$ and this is where the trouble with the data comes in: If you have megapixel image (which is still quite small) the matrix has a million of columns and if you have a dense matrix, storage becomes an issue.
One approach (which is indeed quite old) to tackle this problem is, to consider special measurement matrices: If the signal has a sparse structure is every slice, i.e. every vectors of the form ${x(i_1,\dots,i_{k-1},:,i_{k+1},\dots,i_d)}$ where we fix all but the ${k}$-th component, then the Kronecker product of measurement matrices for each dimension is the right thing.
The Kronecker product of two matrices ${A\in{\mathbb R}^{m\times n}}$ and ${B\in{\mathbb R}^{k\times j}}$ is the ${mk\times nj}$ matrix
$\displaystyle A\otimes B = \begin{bmatrix} a_{11}B & \dots & a_{1n}B\\ \vdots & & \vdots\\ a_{n1}B & \dots & a_{nn}B \end{bmatrix}.$
This has a lot to do with the tensor product and you should read the Wikipedia entry. Moreover, it is numerically advantageous not to build the Kronecker product of dense matrices if you only want to apply it to a given signal. To see this, we introduce the vectorization operator ${\text{vec}:{\mathbb R}^{m\times n}\rightarrow{\mathbb R}^{nm}}$ which takes a matrix ${X}$ and stacks its columns into a tall column vector. For matrices ${A}$ and ${B}$ (of fitting sizes) it holds that
$\displaystyle (B^T\otimes A)\text{vec}(X) = \text{vec}(AXB).$
So, multiplying ${X}$ from the left and from the right gives the application of the Kronecker product.
The use of Kronecker products in numerical linear algebra is fairly old (for example they are helpful for multidimensional finite difference schemes where you can build Kronecker products of sparse difference operators in respective dimensions). Recently, they have been discovered for compressed sensing and sparse recovery in these two papers: Sparse solutions to underdetermined Kronecker product systems by Sadegh Jokar and Volker Mehrmann and the more recent Kronecker Compressed Sensing by Marco Duarte and Rich Baraniuk.
From these papers you can extract some interestingly simple and nice theorems:
Theorem 1 For matrices ${A_1,\dots A_d}$ with restricted isometry constant ${\delta_K(A_1),\dots,\delta_K(A_d)}$ of order ${K}$ it holds that the restricted isometry constant of the Kronecker product fulfills
$\displaystyle \max_i \delta_K(A_i) \leq \delta_K(A_1\otimes\cdots\otimes A_d) \leq \prod_1^d (1+\delta_K(A_i))-1.$
Basically, the RIP constant of a Kronecker product is not better than the worst one but still not too large.
Theorem 2 For matrices ${A_1,\dots, A_d}$ with columns normalized to one, it hold that the spark of their Kronecker product fulfills
$\displaystyle \text{spark}(A\otimes \dots\otimes A_d) = = \min_i\text{spark}(A_i).$
Theorem 3 For matrices ${A_1,\dots, A_d}$ with columns normalized to one, it hold that the mutual coherence of their Kronecker product fulfills
$\displaystyle \mu(A_1\otimes\dots\otimes A_d) = \max_i \mu(A_i).$
If you want to have a sparse solution to a linear system of equation and have heard of compressed sensing or sparse reconstruction than you probably know what to do: Get one of the many solvers for Basis Pursuit and be happy.
Basis Pursuit was designed as a convex approximation of the generally intractable problem of finding the sparsest solution (that is, the solution with the smallest number of non-zero entries). By abuse of notation, we define for ${x\in\mathbb{R}^n}$
$\displaystyle \|x\|_0 = \#\{i\ : x_i\neq 0\}.$
(Because of ${\|x\|_0 = \lim_{p\rightarrow 0}\|x\|_p^p}$ some people prefer the, probably more correct but also more confusing, notation ${\|x\|_0^0}$…).
Then, the sparsest solution of ${Ax=b}$ is the solution of
$\displaystyle \min_x \|x\|_0,\quad \text{s.t.}\ Ax=b$
and Basis Pursuit replaces ${\|x\|_0}$ with “the closest convex proxy”, i.e.
$\displaystyle \min_x \|x\|_1,\quad\text{s.t.}\ Ax=b.$
The good thing about Basis Pursuit suit is, that is really gives the sparsest solution under appropriate conditions as is widely known nowadays. Here I’d like to present two simple examples in which the Basis Pursuit solution is
• not even close to the sparsest solution (by norm).
• not sparse at all.
We can build a bad matrix for Basis Pursuit, even in the case ${2\times 3}$: For a small ${\epsilon>0}$ define
$\displaystyle A = \begin{bmatrix} \epsilon & 1 & 0\\ \epsilon & 0 & 1 \end{bmatrix}, \quad b = \begin{bmatrix} 1\\1 \end{bmatrix}.$
Of course, the sparsest solution is
$\displaystyle x_0 = \begin{bmatrix} 1/\epsilon\\ 0\\ 0\end{bmatrix}$
while the solution of Basis Pursuit is
$\displaystyle x_1 = \begin{bmatrix} 0\\1\\1 \end{bmatrix}.$
The summarize: For ${\epsilon<1/2}$
$\displaystyle \|x_0\|_0 = 1 < 2 = \|x_1\|_0,\quad \|x_0\|_1 = 1/\epsilon > 2 = \|x_1\|_1.$
(There is also a least squares solution that has three non-zero entries and a one-norm slightly larger than 2.)
Granted, this matrix is stupid. Especially, its first column has a very small norm compared to the others. Ok, let’s construct a matrix with normalized columns.
2. A small bad matrix with normalized columns
Fix an integer ${n}$ and a small ${\epsilon>0}$. We define a ${n\times(n+2)}$-matrix
$\displaystyle \begin{bmatrix} 1+\epsilon/2 & -1+\epsilon/2 & 1 & 0 & \dots & \dots &0\\ -1+\epsilon/2 & 1+\epsilon/2 & 0 & 1 & \ddots & & 0\\ \epsilon/2 & \epsilon/2 & \vdots & \ddots& \ddots & \ddots& \vdots\\ \vdots & \vdots & \vdots & & \ddots & \ddots& 0\\ \epsilon/2 & \epsilon/2 & 0 & \dots& \dots& 0 & 1 \end{bmatrix}.$
Ok, the first two columns do not have norm 1 yet, so we normalize them by multiplying with the right constant
$\displaystyle c = \frac{1}{\sqrt{2 + \tfrac{n\epsilon^2}{4}}}$
(which is close to ${1/\sqrt{2}}$) to get
$\displaystyle A = \begin{bmatrix} c(1+\epsilon/2) & c(-1+\epsilon/2) & 1 & 0 & \dots & \dots &0\\ c(-1+\epsilon/2) & c(1+\epsilon/2) & 0 & 1 & \ddots & & 0\\ c\epsilon/2 & c\epsilon/2 & \vdots & \ddots& \ddots & \ddots& \vdots\\ \vdots & \vdots & \vdots & & \ddots & \ddots& 0\\ c\epsilon/2 & c\epsilon/2 & 0 & \dots& \dots& 0 & 1 \end{bmatrix}.$
Now we take the right hand side
$\displaystyle b = \begin{bmatrix} 1\\\vdots\\1 \end{bmatrix}$
and see what solutions to ${Ax=b}$ are there.
First, there is the least squares solution ${x_{\text{ls}} = A^\dagger b}$. This has only non-zero entries, the last ${n}$ entries are slightly smaller than ${1}$ and the first two are between ${0}$ and ${1}$, hence, ${\|x_{\text{ls}}\|_1 \approx n}$ (in fact, slightly larger).
Second, there is a very sparse solution
$\displaystyle x_0 = \frac{1}{\epsilon c} \begin{bmatrix} 1\\ 1\\ 0\\ \vdots\\ 0 \end{bmatrix}.$
This has two non-zero entries and a pretty large one-norm: ${\|x_0\|_1 = 2/(\epsilon c)}$.
Third there is a solution with small one-norm:
$\displaystyle x_1 = \begin{bmatrix} 0\\ 0\\ 1\\ \vdots\\ 1 \end{bmatrix}.$
We have ${n}$ non-zero entries and ${\|x_1\|_1 = n}$. You can check that this ${x_1}$ is also the unique Basis Pursuit solution (e.g. by observing that ${A^T[1,\dots,1]^T}$ is an element of ${\partial\|x_1\|_1}$ and that the first two entries in ${A^T[1,\dots,1]^T}$ are strictly smaller than 1 and positive – put differently, the vector ${[1,\dots,1]^T}$ is dual certificate for ${x_1}$).
To summarize, for ${\epsilon < \sqrt{\frac{8}{n^2-n}}}$ it holds that
$\displaystyle \|x_0\|_0 = 2 < n = \|x_1\|_0,\quad \|x_0\|_1 = 2/(c\epsilon) > n = \|x_1\|_1.$
The geometric idea behind this matrix is as follows: We take ${n}$ simple normalized columns (the identity part in ${A}$) which sum up to the right hand side ${b}$. Then we take two normalized vectors which are almost orthogonal to ${b}$ but have ${b}$ in their span (but one needs huge factors here to obtain ${b}$).
Well, this matrix looks very artificial and indeed it’s constructed for one special purpose: To show that minimal ${\ell^1}$-norm solution are not always sparse (even when a sparse solution exists). It’s some kind of a hobby for me to construct instances for sparse reconstruction with extreme properties and I am thinking about a kind of “gallery” of these instances (probably extending the “gallery” command in Matlab).
By the way: if you want to play around with this matrix, here is the code
n = 100;
epsilon = sqrt(8/(n^2-n))+0.1;
c = 1/sqrt(2+n*epsilon^2/4);
A = zeros(n,n+2);
A(1:2,1:2) = ([1 -1;-1,1]+epsilon/2)*c;
A(3:n,1:2) = epsilon/2*c;
A(1:n,3:n+2) = eye(n);
b = ones(n,1);
How many samples are needed to reconstruct a sparse signal?
Well, there are many, many results around some of which you probably know (at least if you are following this blog or this one). Today I write about a neat result which I found quite some time ago on reconstruction of nonnegative sparse signals from a semi-continuous perspective.
1. From discrete sparse reconstruction/compressed sensing to semi-continuous
The basic sparse reconstruction problem asks the following: Say we have a vector ${x\in{\mathbb R}^m}$ which only has ${s non-zero entries and a fat matrix ${A\in{\mathbb R}^{n\times m}}$ (i.e. ${n>m}$) and consider that we are given measurements ${b=Ax}$. Of course, the system ${Ax=b}$ is underdetermined. However, we may add a little more prior knowledge on the solution and ask: Is is possible to reconstruct ${x}$ from ${b}$ if we know that the vector ${x}$ is sparse? If yes: How? Under what conditions on ${m}$, ${s}$, ${n}$ and ${A}$? This question created the expanding universe of compressed sensing recently (and this universe is expanding so fast that for sure there has to be some dark energy in it). As a matter of fact, a powerful method to obtain sparse solutions to underdetermined systems is ${\ell^1}$-minimization a.k.a. Basis Pursuit on which I blogged recently: Solve
$\displaystyle \min_x \|x\|_1\ \text{s.t.}\ Ax=b$
and the important ingredient here is the ${\ell^1}$-norm of the vector in the objective function.
In this post I’ll formulate semi-continuous sparse reconstruction. We move from an ${m}$-vector ${x}$ to a finite signed measure ${\mu}$ on a closed interval (which we assume to be ${I=[-1,1]}$ for simplicty). We may embed the ${m}$-vectors into the space of finite signed measures by choosing ${m}$ points ${t_i}$, ${i=1,\dots, m}$ from the interval ${I}$ and build ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ with the point-masses (or Dirac measures) ${\delta_{t_i}}$. To a be a bit more precise, we speak about the space ${\mathfrak{M}}$ of Radon measures on ${I}$, which are defined on the Borel ${\sigma}$-algebra of ${I}$ and are finite. Radon measures are not very scary objects and an intuitive way to think of them is to use Riesz representation: Every Radon measure arises as a continuous linear functional on a space of continuous functions, namely the space ${C_0(I)}$ which is the closure of the continuous functions with compact support in ${{]{-1,1}[}}$ with respect to the supremum norm. Hence, Radon measures work on these functions as ${\int_I fd\mu}$. It is also natural to speak of the support ${\text{supp}(\mu)}$ of a Radon measure ${\mu}$ and it holds for any continuous function ${f}$ that
$\displaystyle \int_I f d\mu = \int_{\text{supp}(\mu)}f d\mu.$
An important tool for Radon measures is the Hahn-Jordan decomposition which decomposes ${\mu}$ into a positive part ${\mu^+}$ and a negative part ${\mu^-}$, i.e. ${\mu^+}$ and ${\mu^-}$ are non-negative and ${\mu = \mu^+-\mu^-}$. Finally the variation of a measure, which is
$\displaystyle \|\mu\| = \mu^+(I) + \mu^-(I)$
provides a norm on the space of Radon measures.
Example 1 For the measure ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ one readily calculates that
$\displaystyle \mu^+ = \sum_i \max(0,x_i)\delta_{t_i},\quad \mu^- = \sum_i \max(0,-x_i)\delta_{t_i}$
and hence
$\displaystyle \|\mu\| = \sum_i |x_i| = \|x\|_1.$
In this sense, the space of Radon measures provides a generalization of ${\ell^1}$.
We may sample a Radon measure ${\mu}$ with ${n+1}$ linear functionals and these can be encoded by ${n+1}$ continuous functions ${u_0,\dots,u_n}$ as
$\displaystyle b_k = \int_I u_k d\mu.$
This sampling gives a bounded linear operator ${K:\mathfrak{M}\rightarrow {\mathbb R}^{n+1}}$. The generalization of Basis Pursuit is then given by
$\displaystyle \min_{\mu\in\mathfrak{M}} \|\mu\|\ \text{s.t.}\ K\mu = b.$
This was introduced and called “Support Pursuit” in the preprint Exact Reconstruction using Support Pursuit by Yohann de Castro and Frabrice Gamboa.
More on the motivation and the use of Radon measures for sparsity can be found in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen.
2. Exact reconstruction of sparse nonnegative Radon measures
Before I talk about the results we may count the degrees of freedom a sparse Radon measure has: If ${\mu = \sum_{i=1}^s x_i \delta_{t_i}}$ with some ${s}$ than ${\mu}$ is defined by the ${s}$ weights ${x_i}$ and the ${s}$ positions ${t_i}$. Hence, we expect that at least ${2s}$ linear measurements should be necessary to reconstruct ${\mu}$. Surprisingly, this is almost enough if we know that the measure is nonnegative! We only need one more measurement, that is ${2s+1}$ and moreover, we can take fairly simple measurements, namely the monomials: ${u_i(t) = t^i}$ ${i=0,\dots, n}$ (with the convention that ${u_0(t)\equiv 1}$). This is shown in the following theorem by de Castro and Gamboa.
Theorem 1 Let ${\mu = \sum_{i=1}^s x_i\delta_{t_i}}$ with ${x_i\geq 0}$, ${n=2s}$ and let ${u_i}$, ${i=0,\dots n}$ be the monomials as above. Define ${b_i = \int_I u_i(t)d\mu}$. Then ${\mu}$ is the unique solution of the support pursuit problem, that is of
$\displaystyle \min \|\nu\|\ \text{s.t.}\ K\nu = b.\qquad \textup{(SP)}$
Proof: The following polynomial will be of importance: For a constant ${c>0}$ define
$\displaystyle P(t) = 1 - c \prod_{i=1}^s (t-t_i)^2.$
The following properties of ${P}$ will be used:
1. ${P(t_i) = 1}$ for ${i=1,\dots,s}$
2. ${P}$ has degree ${n=2s}$ and hence, is a linear combination of the ${u_i}$, ${i=0,\dots,n}$, i.e. ${P = \sum_{k=0}^n a_k u_k}$.
3. For ${c}$ small enough it holds for ${t\neq t_i}$ that ${|P(t)|<1}$.
Now let ${\sigma}$ be a solution of (SP). We have to show that ${\|\mu\|\leq \|\sigma\|}$. Due to property 2 we know that
$\displaystyle \int_I u_k d\sigma = (K\sigma)k = b_k = \int_I u_k d\mu.$
Due to property 1 and non-negativity of ${\mu}$ we conclude that
$\displaystyle \begin{array}{rcl} \|\mu\| & = & \sum_{i=1}^s x_i = \int_I P d\mu\\ & = & \int_I \sum_{k=0}^n a_k u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\sigma\\ & = & \int_I P d\sigma. \end{array}$
Moreover, by Lebesgue’s decomposition we can decompose ${\sigma}$ with respect to ${\mu}$ such that
$\displaystyle \sigma = \underbrace{\sum_{i=1}^s y_i\delta_{t_i}}_{=\sigma_1} + \sigma_2$
and ${\sigma_2}$ is singular with respect to ${\mu}$. We get
$\displaystyle \begin{array}{rcl} \int_I P d\sigma = \sum_{i=1}^s y_i + \int P d\sigma_2 \leq \|\sigma_1\| + \|\sigma_2\|=\|\sigma\| \end{array}$
and we conclude that ${\|\sigma\| = \|\mu\|}$ and especially ${\int_I P d\sigma_2 = \|\sigma_2\|}$. This shows that ${\mu}$ is a solution to ${(SP)}$. It remains to show uniqueness. We show the following: If there is a ${\nu\in\mathfrak{M}}$ with support in ${I\setminus\{x_1,\dots,x_s\}}$ such that ${\int_I Pd\nu = \|\nu\|}$, then ${\nu=0}$. To see this, we build, for any ${r>0}$, the sets
$\displaystyle \Omega_r = [-1,1]\setminus \bigcup_{i=1}^s ]x_i-r,x_i+r[.$
and assume that there exists ${r>0}$ such that ${\|\nu|_{\Omega_r}\|\neq 0}$ (${\nu|_{\Omega_r}}$ denoting the restriction of ${\nu}$ to ${\Omega_r}$). However, it holds by property 3 of ${P}$ that
$\displaystyle \int_{\Omega_r} P d\nu < \|\nu|_{\Omega_r}\|$
and consequently
$\displaystyle \begin{array}{rcl} \|\nu\| &=& \int Pd\nu = \int_{\Omega_r} Pd\nu + \int_{\Omega_r^C} P d\nu\\ &<& \|\nu|_{\Omega_r}\| + \|\nu|_{\Omega_r^C}\| = \|\nu\| \end{array}$
which is a contradiction. Hence, ${\nu|_{\Omega_r}=0}$ for all ${r}$ and this implies ${\nu=0}$. Since ${\sigma_2}$ has its support in ${I\setminus\{x_1,\dots,x_s\}}$ we conclude that ${\sigma_2=0}$. Hence the support of ${\sigma}$ is exactly ${\{x_1,\dots,x_s\}}$. and since ${K\sigma = b = K\mu}$ and hence ${K(\sigma-\mu) = 0}$. This can be written as a Vandermonde system
$\displaystyle \begin{pmatrix} u_0(t_1)& \dots &u_0(t_s)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u(t_s) \end{pmatrix} \begin{pmatrix} y_1 - x_1\\ \vdots\\ y_s - x_s \end{pmatrix} = 0$
which only has the zero solution, giving ${y_i=x_i}$. $\Box$
3. Generalization to other measurements
The measurement by monomials may sound a bit unusual. However, de Castro and Gamboa show more. What really matters here is that the monomials for a so-called Chebyshev-System (or Tchebyscheff-system or T-system – by the way, have you ever tried to google for a T-system?). This is explained, for example in the book “Tchebycheff Systems: With Applications in Analysis and Statistics” by Karlin and Studden. A T-system on ${I}$ is simply a set of ${n+1}$ functions ${\{u_0,\dots, u_n\}}$ such that any linear combination of these functions has at most ${n}$ zeros. These systems are called after Tchebyscheff since they obey many of the helpful properties of the Tchebyscheff-polynomials.
What is helpful in our context is the following theorem of Krein:
Theorem 2 (Krein) If ${\{u_0,\dots,u_n\}}$ is a T-system for ${I}$, ${k\leq n/2}$ and ${t_1,\dots,t_k}$ are in the interior of ${I}$, then there exists a linear combination ${\sum_{k=0}^n a_k u_k}$ which is non-negative and vanishes exactly the the point ${t_i}$.
Now consider that we replace the monomials in Theorem~1 by a T-system. You recognize that Krein’s Theorem allows to construct a “generalized polynomial” which fulfills the same requirements than the polynomial ${P}$ is the proof of Theorem~1 as soon as the constant function 1 lies in the span of the T-system and indeed the result of Theorem~1 is also valid in that case.
4. Exact reconstruction of ${s}$-sparse nonnegative vectors from ${2s+1}$ measurements
From the above one can deduce a reconstruction result for ${s}$-sparse vectors and I quote Theorem 2.4 from Exact Reconstruction using Support Pursuit:
Theorem 3 Let ${n}$, ${m}$, ${s}$ be integers such that ${s\leq \min(n/2,m)}$ and let ${\{1,u_1,\dots,u_n\}}$ be a complete T-system on ${I}$ (that is, ${\{1,u_1,\dots,u_r\}}$ is a T-system on ${I}$ for all ${r). Then it holds: For any distinct reals ${t_1,\dots,t_m}$ and ${A}$ defined as
$\displaystyle A=\begin{pmatrix} 1 & \dots & 1\\ u_1(t_1)& \dots &u_1(t_m)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u(t_m) \end{pmatrix}$
Basis Pursuit recovers all nonnegative ${s}$-sparse vectors ${x\in{\mathbb R}^m}$.
5. Concluding remarks
Note that Theorem~3 gives a deterministic construction of a measurement matrix.
Also note, that nonnegativity is crucial in what we did here. This allowed (in the monomial case) to work with squares and obtain the polynomial ${P}$ in the proof of Theorem~1 (which is also called “dual certificate” in this context). This raises the question how this method can be adapted to all sparse signals. One needs (in the monomial case) a polynomial which is bounded by 1 but matches the signs of the measure on its support. While this can be done (I think) for polynomials it seems difficult to obtain a generalization of Krein’s Theorem to this case…
|
2014-04-23 12:03:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 644, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9549866318702698, "perplexity": 475.8647931390163}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://mail-archives.apache.org/mod_mbox/flink-commits/201506.mbox/%3C4932d0b0a1864be89f3c7e04190f552a@git.apache.org%3E
|
##### Site index · List index
Message view
Top
From fhue...@apache.org
Date Fri, 12 Jun 2015 13:33:08 GMT
[FLINK-2072] [ml] Adds quickstart guide
This closes #792.
Commit: 40e2df5acf9385cc3c6e3a947b4bf6cd2bd375b3
Parents: af0fee5
Author: Theodore Vasiloudis <tvas@sics.se>
Authored: Fri Jun 5 11:09:11 2015 +0200
Committer: Fabian Hueske <fhueske@apache.org>
Committed: Fri Jun 12 14:27:29 2015 +0200
----------------------------------------------------------------------
docs/libs/ml/contribution_guide.md | 10 +-
docs/libs/ml/index.md | 27 ++--
docs/libs/ml/quickstart.md | 216 +++++++++++++++++++++++++++++++-
3 files changed, 235 insertions(+), 18 deletions(-)
----------------------------------------------------------------------
----------------------------------------------------------------------
diff --git a/docs/libs/ml/contribution_guide.md b/docs/libs/ml/contribution_guide.md
index 89f05c0..f0754cb 100644
--- a/docs/libs/ml/contribution_guide.md
+++ b/docs/libs/ml/contribution_guide.md
@@ -36,7 +36,7 @@ Everything from this guide also applies to FlinkML.
## Pick a Topic
-If you are looking for some new ideas, then you should check out the list of [unresolved
then you should check out the list of [unresolved issues on JIRA](https://issues.apache.org/jira/issues/?jql=component%20%3D%20%22Machine%20Learning%20Library%22%20AND%20project%20%3D%20FLINK%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20priority%20DESC).
Once you decide to contribute to one of these issues, you should take ownership of it and
track your progress with this issue.
That way, the other contributors know the state of the different issues and redundant work
is avoided.
@@ -61,7 +61,7 @@ Thus, an integration test could look the following:
{% highlight scala %}
class ExampleITSuite extends FlatSpec with FlinkTestBase {
behavior of "An example algorithm"
-
+
it should "do something" in {
...
}
@@ -81,12 +81,12 @@ Every new algorithm is described by a single markdown file.
This file should contain at least the following points:
1. What does the algorithm do
-2. How does the algorithm work (or reference to description)
+2. How does the algorithm work (or reference to description)
3. Parameter description with default values
4. Code snippet showing how the algorithm is used
In order to use latex syntax in the markdown file, you have to include mathjax: include
in the YAML front matter.
-
+
{% highlight java %}
---
mathjax: include
@@ -103,4 +103,4 @@ See docs/_include/latex_commands.html for the complete list of predefined
late
## Contributing
Once you have implemented the algorithm with adequate test coverage and added documentation,
you are ready to open a pull request.
-Details of how to open a pull request can be found [here](http://flink.apache.org/how-to-contribute.html#contributing-code--documentation).
+Details of how to open a pull request can be found [here](http://flink.apache.org/how-to-contribute.html#contributing-code--documentation).
----------------------------------------------------------------------
diff --git a/docs/libs/ml/index.md b/docs/libs/ml/index.md
index de9137d..9ff7a4b 100644
--- a/docs/libs/ml/index.md
+++ b/docs/libs/ml/index.md
@@ -21,9 +21,9 @@ under the License.
-->
FlinkML is the Machine Learning (ML) library for Flink. It is a new effort in the Flink community,
-with a growing list of algorithms and contributors. With FlinkML we aim to provide
-scalable ML algorithms, an intuitive API, and tools that help minimize glue code in end-to-end
ML
-systems. You can see more details about our goals and where the library is headed in our
[vision
+with a growing list of algorithms and contributors. With FlinkML we aim to provide
+scalable ML algorithms, an intuitive API, and tools that help minimize glue code in end-to-end
ML
+systems. You can see more details about our goals and where the library is headed in our
[vision
* This will be replaced by the TOC
@@ -55,10 +55,13 @@ FlinkML currently supports the following algorithms:
## Getting Started
-Next, you have to add the FlinkML dependency to the pom.xml of your project.
+You can check out our [quickstart guide](quickstart.html) for a comprehensive getting started
+example.
-{% highlight bash %}
+Next, you have to add the FlinkML dependency to the pom.xml of your project.
+
+{% highlight xml %}
<dependency>
@@ -85,12 +88,11 @@ mlr.fit(trainingData, parameters)
val predictions: DataSet[LabeledVector] = mlr.predict(testingData)
{% endhighlight %}
-For a more comprehensive guide, please check out our [quickstart guide](quickstart.html)
-
## Pipelines
A key concept of FlinkML is its [scikit-learn](http://scikit-learn.org) inspired pipelining
mechanism.
It allows you to quickly build complex data analysis pipelines how they appear in every data
scientist's daily work.
+An in-depth description of FlinkML's pipelines and their internal workings can be found [here](pipelines.html).
The following example code shows how easy it is to set up an analysis pipeline with FlinkML.
@@ -110,13 +112,14 @@ pipeline.fit(trainingData)
// Calculate predictions
val predictions: DataSet[LabeledVector] = pipeline.predict(testingData)
-{% endhighlight %}
+{% endhighlight %}
One can chain a Transformer to another Transformer or a set of chained Transformers
by calling the method chainTransformer.
-If one wants to chain a Predictor to a Transformer or a set of chained Transformers,
one has to call the method chainPredictor.
-An in-depth description of FlinkML's pipelines and their internal workings can be found [here](pipelines.html).
+If one wants to chain a Predictor to a Transformer or a set of chained Transformers,
one has to call the method chainPredictor.
+
## How to contribute
The Flink community welcomes all contributors who want to get involved in the development
[contribution guide]({{site.baseurl}}/libs/ml/contribution_guide.html).
\ No newline at end of file
+[contribution guide]({{site.baseurl}}/libs/ml/contribution_guide.html).
----------------------------------------------------------------------
diff --git a/docs/libs/ml/quickstart.md b/docs/libs/ml/quickstart.md
index b8501f8..f5d7451 100644
--- a/docs/libs/ml/quickstart.md
+++ b/docs/libs/ml/quickstart.md
@@ -1,4 +1,5 @@
---
+mathjax: include
title: <a href="../ml">FlinkML</a> - Quickstart Guide
---
@@ -24,4 +25,217 @@ under the License.
* This will be replaced by the TOC
{:toc}
-Coming soon.
+## Introduction
+
+FlinkML is designed to make learning from your data a straight-forward process, abstracting
away
+the complexities that usually come with big data learning tasks. In this
+quick-start guide we will show just how easy it is to solve a simple supervised learning
problem
+using FlinkML. But first some basics, feel free to skip the next few lines if you're already
+familiar with Machine Learning (ML).
+
+As defined by Murphy [[1]](#murphy) ML deals with detecting patterns in data, and using those
+learned patterns to make predictions about the future. We can categorize most ML algorithms
into
+two major categories: Supervised and Unsupervised Learning.
+
+* **Supervised Learning** deals with learning a function (mapping) from a set of inputs
+(features) to a set of outputs. The learning is done using a *training set* of (input,
+output) pairs that we use to approximate the mapping function. Supervised learning problems
are
+further divided into classification and regression problems. In classification problems we
try to
+predict the *class* that an example belongs to, for example whether a user is going to click
on
+an ad or not. Regression problems one the other hand, are about predicting (real) numerical
+values, often called the dependent variable, for example what the temperature will be tomorrow.
+
+* **Unsupervised Learning** deals with discovering patterns and regularities in the data.
An example
+of this would be *clustering*, where we try to discover groupings of the data from the
+descriptive features. Unsupervised learning can also be used for feature selection, for example
+through [principal components analysis](https://en.wikipedia.org/wiki/Principal_component_analysis).
+
+
+In order to use FlinkML in your project, first you have to
+Next, you have to add the FlinkML dependency to the pom.xml of your project:
+
+{% highlight xml %}
+<dependency>
+ <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+
+To load data to be used with FlinkML we can use the ETL capabilities of Flink, or specialized
+functions for formatted data, such as the LibSVM format. For supervised learning problems
it is
+common to use the LabeledVector class to represent the (label, features) examples. A
LabeledVector
+object will have a FlinkML Vector member representing the features of the example and a
Double
+member which represents the label, which could be the class in a classification problem,
or the dependent
+variable for a regression problem.
+
+As an example, we can use Haberman's Survival Data Set , which you can
+This dataset *"contains cases from a study conducted on the survival of patients who had
undergone
+surgery for breast cancer"*. The data comes in a comma-separated file, where the first 3
columns
+are the features and last column is the class, and the 4th column indicates whether the patient
+survived 5 years or longer (label 1), or died within 5 years (label 2). You can check the
[UCI
on the data.
+
+We can load the data as a DataSet[String] first:
+
+{% highlight scala %}
+
+
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+val survival = env.readCsvFile[(String, String, String, String)]("/path/to/haberman.data")
+
+{% endhighlight %}
+
+We can now transform the data into a DataSet[LabeledVector]. This will allow us to use
the
+dataset with the FlinkML classification algorithms. We know that the 4th element of the dataset
+is the class label, and the rest are features, so we can build LabeledVector elements like
this:
+
+{% highlight scala %}
+
+
+val survivalLV = survival
+ .map{tuple =>
+ val list = tuple.productIterator.toList
+ val numList = list.map(_.asInstanceOf[String].toDouble)
+ LabeledVector(numList(3), DenseVector(numList.take(3).toArray))
+ }
+
+{% endhighlight %}
+
+We can then use this data to train a learner. We will however use another dataset to exemplify
+building a learner; that will allow us to show how we can import other dataset formats.
+
+**LibSVM files**
+
+A common format for ML datasets is the LibSVM format and a number of datasets using that
format can be
+found [in the LibSVM datasets website](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/).
+datasets using the LibSVM format through the readLibSVM function available through the
MLUtils
+object.
+You can also save datasets in the LibSVM format using the writeLibSVM function.
+[training set here](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/svmguide1)
+and the [test set here](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/svmguide1.t).
+This is an astroparticle binary classification dataset, used by Hsu et al. [[3]](#hsu) in
their
+practical Support Vector Machine (SVM) guide. It contains 4 numerical features, and the class
label.
+
+We can simply import the dataset then using:
+
+{% highlight scala %}
+
+
+
+{% endhighlight %}
+
+This gives us two DataSet[LabeledVector] objects that we will use in the following section
to
+create a classifier.
+
+## Classification
+
+Once we have imported the dataset we can train a Predictor such as a linear SVM classifier.
+We can set a number of parameters for the classifier. Here we set the Blocks parameter,
+which is used to split the input by the underlying CoCoA algorithm [[2]](#jaggi) uses. The
+regularization parameter determines the amount of $l_2$ regularization applied, which is
used
+to avoid overfitting. The step size determines the contribution of the weight vector updates
to
+the next weight vector value. This parameter sets the initial step size.
+
+{% highlight scala %}
+
+
+val svm = SVM()
+ .setBlocks(env.getParallelism)
+ .setIterations(100)
+ .setRegularization(0.001)
+ .setStepsize(0.1)
+ .setSeed(42)
+
+svm.fit(astroTrain)
+
+{% endhighlight %}
+
+We can now make predictions on the test set.
+
+{% highlight scala %}
+
+val predictionPairs = svm.predict(astroTest)
+
+{% endhighlight %}
+
+Next we will see how we can pre-process our data, and use the ML pipelines capabilities of
+
+## Data pre-processing and pipelines
+
+A pre-processing step that is often encouraged [[3]](#hsu) when using SVM classification
is scaling
+the input features to the [0, 1] range, in order to avoid features with extreme values
+dominating the rest.
+FlinkML has a number of Transformers such as MinMaxScaler that are used to pre-process
data,
+and a key feature is the ability to chain Transformers and Predictors together. This
allows
+us to run the same pipeline of transformations and make predictions on the train and test
data in
+a straight-forward and type-safe manner. You can read more on the pipeline system of FlinkML
+[in the pipelines documentation](pipelines.html).
+
+Let us first create a normalizing transformer for the features in our dataset, and chain
it to a
+new SVM classifier.
+
+{% highlight scala %}
+
+
+val scaler = MinMaxScaler()
+
+val scaledSVM = scaler.chainPredictor(svm)
+
+{% endhighlight %}
+
+We can now use our newly created pipeline to make predictions on the test set.
+First we call fit again, to train the scaler and the SVM classifier.
+The data of the test set will then be automatically scaled before being passed on to the
SVM to
+make predictions.
+
+{% highlight scala %}
+
+scaledSVM.fit(astroTrain)
+
+val predictionPairsScaled: DataSet[(Double, Double)] = scaledSVM.predict(astroTest)
+
+{% endhighlight %}
+
+The scaled inputs should give us better prediction performance.
+The result of the prediction on LabeledVectors is a data set of tuples where the first
entry denotes the true label value and the second entry is the predicted label value.
+
+## Where to go from here
+
+This quickstart guide can act as an introduction to the basic concepts of FlinkML, but there's
a lot
+more you can do.
+We recommend going through the [FlinkML documentation](index.html), and trying out the different
+algorithms.
+A very good way to get started is to play around with interesting datasets from the UCI ML
+repository and the LibSVM datasets.
+Tackling an interesting problem from a website like [Kaggle](https://www.kaggle.com) or
+[DrivenData](http://www.drivendata.org/) is also a great way to learn by competing with other
+data scientists.
+If you would like to contribute some new algorithms take a look at our
+[contribution guide](contribution_guide.html).
+
+**References**
+
+<a name="murphy"></a>[1] Murphy, Kevin P. *Machine learning: a probabilistic
perspective.* MIT
+press, 2012.
+
+<a name="jaggi"></a>[2] Jaggi, Martin, et al. *Communication-efficient distributed
dual
+coordinate ascent.* Advances in Neural Information Processing Systems. 2014.
+
+<a name="hsu"></a>[3] Hsu, Chih-Wei, Chih-Chung Chang, and Chih-Jen Lin.
+ *A practical guide to support vector classification.* 2003.
Mime
View raw message
|
2018-04-25 21:19:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3868689239025116, "perplexity": 6861.841796738667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947957.81/warc/CC-MAIN-20180425193720-20180425213720-00457.warc.gz"}
|
https://rust-reference.budshome.com/procedural-macros.html
|
Procedural Macros
Procedural macros allow creating syntax extensions as execution of a function. Procedural macros come in one of three flavors:
Procedural macros allow you to run code at compile time that operates over Rust syntax, both consuming and producing Rust syntax. You can sort of think of procedural macros as functions from an AST to another AST.
Procedural macros must be defined in a crate with the crate type of proc-macro.
Note: When using Cargo, Procedural macro crates are defined with the proc-macro key in your manifest:
[lib]
proc-macro = true
As functions, they must either return syntax, panic, or loop endlessly. Returned syntax either replaces or adds the syntax depending on the kind of procedural macro. Panics are caught by the compiler and are turned into a compiler error. Endless loops are not caught by the compiler which hangs the compiler.
Procedural macros run during compilation, and thus have the same resources that the compiler has. For example, standard input, error, and output are the same that the compiler has access to. Similarly, file access is the same. Because of this, procedural macros have the same security concerns that Cargo’s build scripts have.
Procedural macros have two ways of reporting errors. The first is to panic. The second is to emit a compile_error macro invocation.
The proc_macro crate
Procedural macro crates almost always will link to the compiler-provided proc_macro crate. The proc_macro crate provides types required for writing procedural macros and facilities to make it easier.
This crate primarily contains a TokenStream type. Procedural macros operate over token streams instead of AST nodes, which is a far more stable interface over time for both the compiler and for procedural macros to target. A token stream is roughly equivalent to Vec<TokenTree> where a TokenTree can roughly be thought of as lexical token. For example foo is an Ident token, . is a Punct token, and 1.2 is a Literal token. The TokenStream type, unlike Vec<TokenTree>, is cheap to clone.
All tokens have an associated Span. A Span is an opaque value that cannot be modified but can be manufactured. Spans represent an extent of source code within a program and are primarily used for error reporting. You can modify the Span of any token.
Procedural macro hygiene
Procedural macros are unhygienic. This means they behave as if the output token stream was simply written inline to the code it’s next to. This means that it’s affected by external items and also affects external imports.
Macro authors need to be careful to ensure their macros work in as many contexts as possible given this limitation. This often includes using absolute paths to items in libraries (for example, ::std::option::Option instead of Option) or by ensuring that generated functions have names that are unlikely to clash with other functions (like __internal_foo instead of foo).
Function-like procedural macros
Function-like procedural macros are procedural macros that are invoked using the macro invocation operator (!).
These macros are defined by a public function with the proc_macro attribute and a signature of (TokenStream) -> TokenStream. The input TokenStream is what is inside the delimiters of the macro invocation and the output TokenStream replaces the entire macro invocation.
For example, the following macro definition ignores its input and outputs a function answer into its scope.
#![crate_type = "proc-macro"]
extern crate proc_macro;
use proc_macro::TokenStream;
#[proc_macro]
pub fn make_answer(_item: TokenStream) -> TokenStream {
"fn answer() -> u32 { 42 }".parse().unwrap()
}
And then we use it a binary crate to print “42” to standard output.
extern crate proc_macro_examples;
fn main() {
}
Function-like procedural macros may expand to a type or any number of items, including macro_rules definitions. They may be invoked in a type expression, item position (except as a statement), including items in extern blocks, inherent and trait implementations, and trait definitions. They cannot be used in a statement, expression, or pattern.
Derive macros
Derive macros define new inputs for the derive attribute. These macros can create new items given the token stream of a struct, enum, or union. They can also define derive macro helper attributes.
Custom derive macros are defined by a public function with the proc_macro_derive attribute and a signature of (TokenStream) -> TokenStream.
The input TokenStream is the token stream of the item that has the derive attribute on it. The output TokenStream must be a set of items that are then appended to the module or block that the item from the input TokenStream is in.
The following is an example of a derive macro. Instead of doing anything useful with its input, it just appends a function answer.
#![crate_type = "proc-macro"]
extern crate proc_macro;
use proc_macro::TokenStream;
pub fn derive_answer_fn(_item: TokenStream) -> TokenStream {
"fn answer() -> u32 { 42 }".parse().unwrap()
}
And then using said derive macro:
extern crate proc_macro_examples;
struct Struct;
fn main() {
}
Derive macro helper attributes
Derive macros can add additional attributes into the scope of the item they are on. Said attributes are called derive macro helper attributes. These attributes are inert, and their only purpose is to be fed into the derive macro that defined them. That said, they can be seen by all macros.
The way to define helper attributes is to put an attributes key in the proc_macro_derive macro with a comma separated list of identifiers that are the names of the helper attributes.
For example, the following derive macro defines a helper attribute helper, but ultimately doesn’t do anything with it.
#![crate_type="proc-macro"]
extern crate proc_macro;
use proc_macro::TokenStream;
#[proc_macro_derive(HelperAttr, attributes(helper))]
pub fn derive_helper_attr(_item: TokenStream) -> TokenStream {
TokenStream::new()
}
And then usage on the derive macro on a struct:
#[derive(HelperAttr)]
struct Struct {
#[helper] field: ()
}
Attribute macros
Attribute macros define new outer attributes which can be attached to items, including items in extern blocks, inherent and trait implementations, and trait definitions.
Attribute macros are defined by a public function with the proc_macro_attribute attribute that has a signature of (TokenStream, TokenStream) -> TokenStream. The first TokenStream is the delimited token tree following the attribute’s name, not including the outer delimiters. If the attribute is written as a bare attribute name, the attribute TokenStream is empty. The second TokenStream is the rest of the item including other attributes on the item. The returned TokenStream replaces the item with an arbitrary number of items.
For example, this attribute macro takes the input stream and returns it as is, effectively being the no-op of attributes.
#![crate_type = "proc-macro"]
extern crate proc_macro;
use proc_macro::TokenStream;
#[proc_macro_attribute]
pub fn return_as_is(_attr: TokenStream, item: TokenStream) -> TokenStream {
item
}
This following example shows the stringified TokenStreams that the attribute macros see. The output will show in the output of the compiler. The output is shown in the comments after the function prefixed with “out:”.
// my-macro/src/lib.rs
extern crate proc_macro;
use proc_macro::TokenStream;
#[proc_macro_attribute]
pub fn show_streams(attr: TokenStream, item: TokenStream) -> TokenStream {
println!("attr: \"{}\"", attr.to_string());
println!("item: \"{}\"", item.to_string());
item
}
// src/lib.rs
extern crate my_macro;
use my_macro::show_streams;
// Example: Basic function
#[show_streams]
fn invoke1() {}
// out: attr: ""
// out: item: "fn invoke1() { }"
// Example: Attribute with input
#[show_streams(bar)]
fn invoke2() {}
// out: attr: "bar"
// out: item: "fn invoke2() {}"
// Example: Multiple tokens in the input
#[show_streams(multiple => tokens)]
fn invoke3() {}
// out: attr: "multiple => tokens"
// out: item: "fn invoke3() {}"
// Example:
#[show_streams { delimiters }]
fn invoke4() {}
// out: attr: "delimiters"
// out: item: "fn invoke4() {}"
|
2020-03-31 12:45:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.338420033454895, "perplexity": 8135.88729477384}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500482.27/warc/CC-MAIN-20200331115844-20200331145844-00369.warc.gz"}
|
http://zbmath.org/?q=an:1166.65338
|
# zbMATH — the first resource for mathematics
##### Examples
Geometry Search for the term Geometry in any field. Queries are case-independent. Funct* Wildcard queries are specified by * (e.g. functions, functorial, etc.). Otherwise the search is exact. "Topological group" Phrases (multi-words) should be set in "straight quotation marks". au: Bourbaki & ti: Algebra Search for author and title. The and-operator & is default and can be omitted. Chebyshev | Tschebyscheff The or-operator | allows to search for Chebyshev or Tschebyscheff. "Quasi* map*" py: 1989 The resulting documents have publication year 1989. so: Eur* J* Mat* Soc* cc: 14 Search for publications in a particular source with a Mathematics Subject Classification code (cc) in 14. "Partial diff* eq*" ! elliptic The not-operator ! eliminates all results containing the word elliptic. dt: b & au: Hilbert The document type is set to books; alternatively: j for journal articles, a for book articles. py: 2000-2015 cc: (94A | 11T) Number ranges are accepted. Terms can be grouped within (parentheses). la: chinese Find documents in a given language. ISO 639-1 language codes can also be used.
##### Operators
a & b logic and a | b logic or !ab logic not abc* right wildcard "ab c" phrase (ab c) parentheses
##### Fields
any anywhere an internal document identifier au author, editor ai internal author identifier ti title la language so source ab review, abstract py publication year rv reviewer cc MSC code ut uncontrolled term dt document type (j: journal article; b: book; a: book article)
A class of two-step Steffensen type methods with fourth-order convergence. (English) Zbl 1166.65338
Summary: Based on Steffensen’s method, we derive a one-parameter class of fourth-order methods for solving nonlinear equations. In the proposed methods, an interpolating polynomial is used to get a better approximation to the derivative of the given function. Each member of the class requires three evaluations of the given function per iteration. Therefore, this class of methods has efficiency index which equals 1.587. H. T. Kung and J. F. Traub conjectured an iteration using $n$ evaluations of $f$ or its derivatives without memory is of convergence order at most $2n-1$ [J. Assoc. Comput. Mach. 21, 643–651 (1974; Zbl 0289.65023)]. The new class of fourth-order methods agrees with the conjecture of Kung-Traub for the case $n=3$. Numerical comparisons are made to show the performance of the presented methods.
##### MSC:
65H05 Single nonlinear equations (numerical methods)
##### References:
[1] Ortega, J. M.; Rheinbolt, W. C.: Iterative solution of nonlinear equations in several variables, (1970) · Zbl 0241.65046 [2] Argyros, I. K.: Convergence and application of Newton-type iterations, (2008) [3] Johnson, L. W.; Riess, R. D.: Numerical analysis, (1977) · Zbl 0412.65001 [4] Dehghan, M.; Hajarian, M.: A variant of Steffensen’s method with a better approximation to the derivative, Appl. math. Comput. (2008) [5] Jain, P.: Steffensen type methods for solving non-linear equations, Appl. math. Comput. 194, 527-533 (2007) · Zbl 1193.65063 · doi:10.1016/j.amc.2007.04.087 [6] Sharma, J. R.: A composite third order Newton – Steffensen method for solving nonlinear equations, Appl. math. Comput. 169, 242-246 (2005) · Zbl 1084.65054 · doi:10.1016/j.amc.2004.10.040 [7] Amat, S.; Busquier, S.: Convergence and numerical analysis of a family of two-step Steffensen’s methods, Comput. math. Appl. 49, 13-22 (2005) · Zbl 1075.65080 · doi:10.1016/j.camwa.2005.01.002 [8] Amat, S.; Busquier, S.: A two-step Steffensen’s method under modified convergence conditions, J. math. Anal. appl. 324, 1084-1092 (2006) · Zbl 1103.65060 · doi:10.1016/j.jmaa.2005.12.078 [9] Amat, S.; Busquier, S.: On a Steffensen’s type method and its behavior for semismooth equations, Appl. math. Comput. 177, 819-823 (2006) · Zbl 1096.65047 · doi:10.1016/j.amc.2005.11.032 [10] Alarcn, V.; Amat, S.; Busquier, S.; Lpez, D. J.: A Steffensen’s type method in Banach spaces with applications on boundary-value problems, J. comput. Appl. math. 216, 243-250 (2008) · Zbl 1139.65040 · doi:10.1016/j.cam.2007.05.008 [11] Gautschi, W.: Numerical analysis: an introduction, (1997) [12] Bi, W.; Ren, H.; Wu, Q.: New family of seventh-order methods for nonlinear equations, Appl. math. Comput. 203, 408-412 (2008) · Zbl 1154.65323 · doi:10.1016/j.amc.2008.04.048 [13] Xu, L.; Wang, X.: Topics on methods and examples of mathematical analysis (in chinese), (1983) [14] Kung, H. T.; Traub, J. F.: Optimal order of one-point and multipoint iteration, J. assoc. Comput. math. 21, 634-651 (1974) · Zbl 0289.65023 · doi:10.1145/321850.321860 [15] Weerakoon, S.; Fernando, T. G. I.: A variant of Newton’s method with accelerated third-order convergence, Appl. math. Lett. 13, 87-93 (2000) · Zbl 0973.65037 · doi:10.1016/S0893-9659(00)00100-2
|
2014-04-25 03:46:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5623323917388916, "perplexity": 7552.0157281152615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.sshade.eu/data/publication/PUBLI_Mahjoub_2012
|
Publication
Names
• A. Mahjoub
• N. Carrasco
• P.-R. Dahoo
• T. Gautier
• C. Szopa
• G. Cernogora
Title
Influence of methane concentration on the optical indices of Titan’s aerosols analogues
Abstract
This work deals with the optical constant characterization of Titan aerosol analogues or “tholins” produced with the PAMPRE experimental setup and deposited as thin films onto a silicon substrate. Tholins were produced in different N2–CH4 gaseous mixtures to study the effect of the initial methane concentration on their optical constants. The real (n) and imaginary (k) parts of the complex refractive index were determined using the spectroscopic ellipsometry technique in the 370–1000 nm wavelength range. We found that optical constants depend strongly on the methane concentrations of the gas phase in which tholins are produced: imaginary optical index (k) decreases with initial CH4 concentration from $2.3 × 10^{−2}$ down to $2.7 × 10^{−3}$ at 1000 nm wavelength, while the real optical index (n) increases from 1.48 up to 1.58 at 1000 nm wavelength. The larger absorption in the visible range of tholins produced at lower methane percentage is explained by an increase of the secondary and primary amines signature in the mid-IR absorption. Comparison with results of other tholins and data from Titan observations are presented. We found an agreement between our values obtained with 10% methane concentration, and Imanaka et al. values, in spite of the difference in the analytical method. This confirms a reliability of the optical properties of tholins prepared with various setups but with similar plasma conditions. Our comparison with Titan’s observations also raises a possible inconsistency between the mid-IR aerosol signature by VIMS and CIRS Cassini instruments and the visible Huygens-DISR derived data. The mid-IR VIMS and CIRS signatures are in agreement with an aerosol dominated by an aliphatic carbon content, whereas the important visible absorption derived from the DISR measurement seems to be incompatible with such an important aliphatic content, but more compatible with an amine-rich aerosol.
Keywords
Titan, Photometry, Radiative transfer, Titan, atmosphere, tholins, optical constants spectra, UV, visible
Content
instrument-technique, sample, spectral data, planetary sciences
Year
2012
Journal
Icarus
Volume
221
Number
2
Pages
670 - 677
Pages number
8
Document type
article
Publication state
published
|
2023-03-26 02:34:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4634290933609009, "perplexity": 4961.903171220929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00563.warc.gz"}
|
http://www.researchgate.net/publication/8034745_Rabi_Oscillations_Revival_Induced_by_Time_Reversal_A_Test_of_Mesoscopic_Quantum_Coherence
|
Article
# Rabi Oscillations Revival Induced by Time Reversal: A Test of Mesoscopic Quantum Coherence
[more]
Collège de France, Lutetia Parisorum, Île-de-France, France
(Impact Factor: 7.73). 02/2005; 94(1):010401. DOI: 10.1103/PhysRevLett.94.010401
Source: PubMed
ABSTRACT Using an echo technique proposed by Morigi et al. [Phys. Rev. A 65, 040102 (2002)], we have time-reversed the atom-field interaction in a cavity quantum electrodynamics experiment. The collapse of the atomic Rabi oscillation in a coherent field is reversed, resulting in an induced revival signal. The amplitude of this "echo" is sensitive to nonunitary decoherence processes. Its observation demonstrates the existence of a mesoscopic quantum superposition of field states in the cavity between the collapse and the revival times.
### Full-text
Available from: A. Auffèves, May 12, 2014
0 Followers
·
180 Views
• Source
##### Article: Nondestructive Rydberg Atom Counting with Mesoscopic Fields in a Cavity
[Hide abstract]
ABSTRACT: We present an efficient, state-selective, nondemolition atom-counting procedure based on the dispersive interaction of a sample of circular Rydberg atoms with a mesoscopic field contained in a high-quality superconducting cavity. The state-dependent atomic index of refraction, proportional to the atom number, shifts the classical field phase. A homodyne procedure translates the information from the phase to the intensity. The final field intensity is readout by a mesoscopic atomic sample. This method opens promising routes for quantum information processing and nonclassical state generation with Rydberg atoms.
Physical Review Letters 04/2005; 94(11):113601. DOI:10.1103/PhysRevLett.94.113601 · 7.73 Impact Factor
• Source
##### Article: Quantum random walk of the field in an externally driven cavity
[Hide abstract]
ABSTRACT: Using resonant interaction between atoms and the field in a high quality cavity, we show how to realize quantum random walks as proposed by Aharonov et al [Phys. Rev. A {\bf48}, 1687 (1993)]. The atoms are driven strongly by a classical field. Under conditions of strong driving we could realize an effective interaction of the form $iS^{x}(a-a^{\dag})$ in terms of the spin operator associated with the two level atom and the field operators. This effective interaction generates displacement in the field's wavefunction depending on the state of the two level atom. Measurements of the state of the two level atom would then generate effective state of the field. Using a homodyne technique, the state of the quantum random walker can be monitored. Comment: 6-page 4-fig. submitted Phy. Rev A
Physical Review A 04/2005; 72(3). DOI:10.1103/PhysRevA.72.033815 · 2.99 Impact Factor
• Source
##### Article: Probing a quantum field in a photon box
[Hide abstract]
ABSTRACT: Einstein often performed thought experiments with 'photon boxes', storing fields for unlimited times. This is yet but a dream. We can nevertheless store quantum microwave fields in superconducting cavities for billions of periods. Using circular Rydberg atoms, it is possible to probe in a very detailed way the quantum state of these trapped fields. Cavity quantum electrodynamics tools can be used for a direct determination of the Husimi Q and Wigner quasi-probability distributions. They provide a very direct insight into the classical or non-classical nature of the field.
Journal of Physics B Atomic Molecular and Optical Physics 04/2005; 38(9):S535. DOI:10.1088/0953-4075/38/9/006 · 1.92 Impact Factor
|
2015-07-07 17:36:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6742724180221558, "perplexity": 2508.2906876486118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099758.82/warc/CC-MAIN-20150627031819-00138-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://ec.gateoverflow.in/2347/gate-ece-2006-question-61
|
4 views
A zero- mean white Gaussian noise is passed through an ideal lowpass filter of ban width $10 \; \mathrm{kHz}$. The output is the uniformly sampled with sampling period $t=0.03 \; \mathrm{msec}$.
The samples so obtained would be
1. correlated
2. statistically independent
3. uncorrelated
4. orthogonal
|
2022-10-01 18:31:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9234573841094971, "perplexity": 4792.339908148741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336880.89/warc/CC-MAIN-20221001163826-20221001193826-00735.warc.gz"}
|
http://data8.org/datascience/util.html
|
# Utility Functions (datascience.util)¶
Utility functions
datascience.util.make_array(*elements)[source]
Returns an array containing all the arguments passed to this function. A simple way to make an array with a few elements.
As with any array, all arguments should have the same type.
>>> make_array(0)
array([0])
>>> make_array(2, 3, 4)
array([2, 3, 4])
>>> make_array("foo", "bar")
array(['foo', 'bar'],
dtype='<U3')
>>> make_array()
array([], dtype=float64)
datascience.util.percentile(p, arr=None)[source]
Returns the pth percentile of the input array (the value that is at least as great as p% of the values in the array).
If arr is not provided, percentile returns itself curried with p
>>> percentile(74.9, [1, 3, 5, 9])
5
>>> percentile(75, [1, 3, 5, 9])
5
>>> percentile(75.1, [1, 3, 5, 9])
9
>>> f = percentile(75)
>>> f([1, 3, 5, 9])
5
datascience.util.plot_cdf_area(rbound=None, lbound=None, mean=0, sd=1)
Plots a normal curve with specified parameters and area below curve shaded between lbound and rbound.
Args:
rbound (numeric): right boundary of shaded region
lbound (numeric): left boundary of shaded region; by default is negative infinity
mean (numeric): mean/expectation of normal distribution
sd (numeric): standard deviation of normal distribution
datascience.util.plot_normal_cdf(rbound=None, lbound=None, mean=0, sd=1)[source]
Plots a normal curve with specified parameters and area below curve shaded between lbound and rbound.
Args:
rbound (numeric): right boundary of shaded region
lbound (numeric): left boundary of shaded region; by default is negative infinity
mean (numeric): mean/expectation of normal distribution
sd (numeric): standard deviation of normal distribution
datascience.util.table_apply(table, func, subset=None)[source]
Applies a function to each column and returns a Table.
Uses pandas apply under the hood, then converts back to a Table
Args:
table : instance of Table
The table to apply your function to
func : function
Any function that will work with DataFrame.apply
subset : list | None
A list of columns to apply the function to. If None, function will be applied to all columns in table
tab : instance of Table
A table with the given function applied. It will either be the shape == shape(table), or shape (1, table.shape[1])
datascience.util.proportions_from_distribution(table, label, sample_size, column_name='Random Sample')[source]
Adds a column named column_name containing the proportions of a random draw using the distribution in label.
This method uses np.random.multinomial to draw sample_size samples from the distribution in table.column(label), then divides by sample_size to create the resulting column of proportions.
Args:
table: An instance of Table.
label: Label of column in table. This column must contain a
distribution (the values must sum to 1).
sample_size: The size of the sample to draw from the distribution.
column_name: The name of the new column that contains the sampled
proportions. Defaults to 'Random Sample'.
Returns:
A copy of table with a column column_name containing the sampled proportions. The proportions will sum to 1.
Throws:
ValueError: If the label is not in the table, or if
table.column(label) does not sum to 1.
datascience.util.sample_proportions(sample_size, probabilities)[source]
Return the proportion of random draws for each outcome in a distribution.
This function is similar to np.random.multinomial, but returns proportions instead of counts.
Args:
sample_size: The size of the sample to draw from the distribution.
probabilities: An array of probabilities that forms a distribution.
Returns:
An array with the same length as probability that sums to 1.
datascience.util.minimize(f, start=None, smooth=False, log=None, array=False, **vargs)[source]
Minimize a function f of one or more arguments.
Args:
f: A function that takes numbers and returns a number
start: A starting value or list of starting values
smooth: Whether to assume that f is smooth and use first-order info
log: Logging function called on the result of optimization (e.g. print)
vargs: Other named arguments passed to scipy.optimize.minimize
Returns either:
1. the minimizing argument of a one-argument function
2. an array of minimizing arguments of a multi-argument function
|
2019-02-19 08:51:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34285393357276917, "perplexity": 5071.267156356685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489729.11/warc/CC-MAIN-20190219081639-20190219103639-00550.warc.gz"}
|
https://codereview.stackexchange.com/questions/97955/using-enum-to-apply-consumer/98148
|
Using Enum to apply Consumer
I have a utility called ZipFileCombiner that combines zip archives into a single archive. Part of the implementation is using an Enum called CollisionStrategy to deal with entry collisions. It's a nested Enum. It seems kind of ugly to me. It feels like embedding that much code within the Enum seems wrong. I could create yet another inner class (or outer) to hold it, but that seems wrong as well since logic is tightly coupled to the idea of a strategy. I'm looking for suggestions as well as a general peer review.
Full code here in GitHub. (IOUtil is also in that project.)
CollisionStrategy
public enum CollisionStrategy {
FAIL((bean, combiner) -> {
throw new IllegalStateException("Collision detected. Entry " + bean.entry() +
" exists. Current source: " + bean.source());
}),
USE_FIRST((bean, combiner) -> {}),
RENAME_AND_ADD((bean, combiner) -> {
if(bean.entry().isDirectory()){
return;
}
String[] sourceNameParts = bean.source().getName().split("/");
String sourceName = sourceNameParts[sourceNameParts.length-1];
String[] entryNameParts = bean.entry().getName().split("/");
String entryName = entryNameParts[entryNameParts.length-1];
entryNameParts[entryNameParts.length-1] = sourceName + entryName;
String newEntryName = Arrays.asList(entryNameParts).stream().collect(Collectors.joining("/"));
// Have to manually "clone" the entry. This sucks.
ZipEntry entry = new ZipEntry(newEntryName);
entry.setTime(bean.entry().getTime());
entry.setComment(bean.entry().getComment());
entry.setCompressedSize(bean.entry().getCompressedSize());
entry.setCrc(bean.entry().getCrc());
entry.setCreationTime(bean.entry().getCreationTime());
entry.setMethod(bean.entry().getMethod());
entry.setExtra(bean.entry().getExtra());
entry.setLastAccessTime(bean.entry().getLastAccessTime());
entry.setLastModifiedTime(bean.entry().getLastModifiedTime());
try (InputStream in = bean.source().getInputStream(bean.entry())){
combiner.copyEntryFromSourceToTarget(in,entry,bean.zipOutputStream());
} catch (IOException e) {
throw new IllegalStateException(e);
}
});
BiConsumer<CombinerBean,ZipFileCombiner> biConsumer;
CollisionStrategy(BiConsumer<CombinerBean,ZipFileCombiner> consumer){
biConsumer = consumer;
}
void apply(CombinerBean bean, ZipFileCombiner combiner){
biConsumer.accept(bean,combiner);
}
}
CombinerBean
/*
CombinerBean is used in the CollisionStrategy. It only serves to clean up code such that long arg lists
aren't necessary.
*/
private static class CombinerBean{
private final ZipEntry e;
private final ZipFile src;
private final ZipOutputStream out;
public CombinerBean(ZipOutputStream zo, ZipEntry ze, ZipFile source){
e = ze; src = source; out = zo;
}
public ZipEntry entry(){ return e; }
public ZipFile source(){ return src; }
public ZipOutputStream zipOutputStream(){ return out; }
}
Method addEntryContent that refers to the strategy
private void addEntryContent(final ZipOutputStream out, final ZipFile source, final ZipEntry entry, final Set<String> entryNames) throws IOException {
// Assuming duplicate directory entries across archives are OK, so skip if directory
if(!entry.isDirectory()) {
LOGGER.warning(entry.getName() + " has already been added. Applying strategy: " + strategy);
strategy.apply(new CombinerBean(out, entry, source), this);
}
}else {
try (InputStream in = source.getInputStream(entry)) {
copyEntryFromSourceToTarget(in, entry, out);
}
}
}
Method copyEntryFromSourceToTarget which is called by the strategy
private void copyEntryFromSourceToTarget(final InputStream in, final ZipEntry targetEntry, final ZipOutputStream out) throws IOException {
out.putNextEntry(targetEntry);
out.closeEntry();
}
• Not an answer but I usually keep enum's in all capital letters or some other way to easily pick out. – Evan Carslake Jul 24 '15 at 15:45
• An alternative to wrapping a lambda is to make the enum CollisionStrategy implements BiConsumer<CombinerBean,ZipFileCombiner> – Peter Lawrey Jul 25 '15 at 5:52
• It's best practices to let statements like if, for, ... be followed by a space to disinguish them from method invocations.
• CollisionStrategy
I'd use:
String.format("Collision detected. Entry %s exists. Current source: %s",
bean.entry(), bean.source());
instead of string concatenation:
"Collision detected. Entry " + bean.entry() +
" exists. Current source: " + bean.source()
The same applies to addEntryContent().
• CombinerBean
Java best practices is "one statement per line" since your code is a lot more read (possibly by others, too) than it is written.
• addEntryContent() and copyEntryFromSourceToTarget()
You break lines at ~104 in CollisionStrategy but you dont do it in these methods. That's not consistent style.
• Thanks for the feedback. I agree with your points in general, but I don't apply blanket rules to my code. If a certain style for a block of code seems easier to read, even if it breaks conventions, then use it. I understand it's a subjective call, so maybe others don't feel the same. – MadConan Jul 27 '15 at 13:07
• @MadConan What are you referring to in detail? – GeroldBroser reinstates Monica Jul 27 '15 at 13:10
• CombinerBean, one statement per line. For the simplistic, one-line-code-blocks in a small inner class, I personally prefer seeing it all together at a glance as opposed to scrolling back and forth. – MadConan Jul 27 '15 at 13:13
• The String.format` suggestion is a good one, though. I need to start using that more. Old habits die hard :) – MadConan Jul 27 '15 at 13:16
• @MadConan With 8 more lines you have to scroll? Are you developing on a smartphone in landscape view? :-) – GeroldBroser reinstates Monica Jul 27 '15 at 13:19
|
2019-12-12 20:15:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3150188624858856, "perplexity": 11990.162669309291}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540545146.75/warc/CC-MAIN-20191212181310-20191212205310-00459.warc.gz"}
|
https://www.tutorialspoint.com/How-to-represent-text-that-must-be-isolated-from-its-surrounding-for-bidirectional-text-formatting-in-HTML
|
# How to represent text that must be isolated from its surrounding for bidirectional text formatting in HTML?
HTMLWeb DevelopmentFront End Technology
#### Kickstart HTML, CSS and PHP: Build a Responsive Website
Featured
59 Lectures 8.5 hours
#### Web Design for Beginners: Build Websites in HTML & CSS 2022
68 Lectures 8 hours
#### Complete HTML/CSS Course 2022
Featured
47 Lectures 12.5 hours
The <bdi> HTML element instructs the bidirectional algorithm of the browser to treat the text it contains independently of the content around it. It's very helpful when a website dynamically adds some text without knowing which direction it should go.
For example, few languages like Arabic, Urdu or Hebrew are written in the right – to – left direction instead of the usual left – to – right. We use the <bdi></bdi> tag before and after the text that goes in the opposite direction of the script to rearrange it.
However, if there is confusion about the text direction or we do not know which direction the text goes in, apply the dir = auto tag to any markup wrapping the text location. But if there is no markup present, wrap the text with the bdi tag again.
## Syntax
<bdi> text </bdi>
Following are the examples…
## Example
In the following example we are using bdi element to represent a span of text to get isolated from its surroundings. Here, we are wrapping one line in Arabic using the bdi tag and another line with the span markup using the dir = auto tag.
<!DOCTYPE html>
<html>
<body>
<h1>World wrestling championships</h1>
<ul>
<li><bdi class="name">Akshay</bdi>: 1st place</li>
<li><bdi class="name">Balu</bdi>: 2nd place</li>
<li><span class="name">Mani</span>: 3rd place</li>
<li><bdi class="name">الرجل القوي إيان</bdi>: 4th place</li>
<li><span class="name" dir="auto">تیز سمی</span>: 5th place</li>
</ul>
</body>
</html>
## Output
On executing the above script, the output window opens up consisting of different wrestling champions added with <bdi> along with their position to make our text in the right way.
Updated on 05-Sep-2022 12:26:38
|
2022-09-25 11:16:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5866864919662476, "perplexity": 5634.065771794826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00340.warc.gz"}
|
https://planetmath.org/harnacksprinciple
|
# Harnack’s principle
If the functions$u_{1}(z)$, $u_{2}(z)$, … are harmonic (http://planetmath.org/HarmonicFunction) in the domain $G\subseteq\mathbb{C}$ and
$u_{1}(z)\leq u_{2}(z)\leq\cdots$
in every point of $G$, then $\lim_{n\to\infty}u_{n}(z)$ either is infinite in every point of the domain or it is finite in every point of the domain, in both cases uniformly (http://planetmath.org/UniformConvergence) in each closed (http://planetmath.org/ClosedSet) subdomain of $G$. In the latter case, the function $u(z)=\lim_{n\to\infty}u_{n}(z)$ is harmonic in the domain $G$ (cf. limit function of sequence).
Title Harnack’s principle HarnacksPrinciple 2013-03-22 14:57:35 2013-03-22 14:57:35 Mathprof (13753) Mathprof (13753) 14 Mathprof (13753) Theorem msc 30F15 msc 31A05
|
2019-02-20 14:08:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 9, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8866229057312012, "perplexity": 7560.3293880045385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247495001.53/warc/CC-MAIN-20190220125916-20190220151916-00116.warc.gz"}
|
https://studyqueries.com/preferir-conjugation/
|
# How To Use Conjugation Of Preferir Verb (Present, Past & Future Tense) In Spanish?
Conjugation Of Preferir Verb in Spanish, Basically, the verb preferir means to prefer. These two words are cognates, which means they have a similar sound and that they have the same meaning since they both come from the same Latin verb. Consequently, they are used in the same context. If you want to express that you would rather you do something or that you would prefer something in English, you can use the verb preferir in Spanish.
When talking about preferring to do something, the verb preferir in Spanish is followed by an infinitive verb, as in Prefiero comer en casa (I prefer to eat at home). As a matter of fact, preferir can also be followed by a noun when we are talking about preferring one thing over another. For example, you can say Prefiero el frío que el calor (I prefer the cold over the heat).
## Conjugation Of Preferir (Present, Past & Future Tense) In Spanish
In this case, the conjugation of prefer is irregular due to the fact that it is a stem-changing verb. The result is that whenever the second e in the stem of the verb appears in a stressed syllable, the second e changes into another vowel. If you are using Preferir, you need to pay extra attention to it because sometimes it changes from ie to just i. As an example, the first person present tense conjugation of preferir is prefiero, while the third person preterite conjugation of preferir is prefierió.
This article includes conjugations of preferir in the indicative mood (present, past, conditional, future), in the subjunctive mood (both present and past), in the imperative mood, plus other verb forms.
How To Use Partir Verb Conjugation (Preterite, Subjuntive) In French?
### Conjugation of Preferir In the Present Indicative
Notice that the stem change from e to ie occurs in all the conjugations except for Nosotros and Vosotros in the present indicative tense.
$$\mathbf{\color{red}{Yo:-}}$$ prefiero
Example In Spanish: Yo prefiero estudiar sola.
Translation In English: I prefer to study alone.
$$\mathbf{\color{red}{Tú:-}}$$prefieres
Example In Spanish: Tú prefieres el frío que el calor.
Translation In English: You prefer the cold over the heat.
$$\mathbf{\color{red}{Ella / Él / Usted:-}}$$prefiere
Example In Spanish: Ella prefiere viajar en bus.
Translation In English: She prefers to travel by bus.
$$\mathbf{\color{red}{Nosotros:-}}$$preferimos
Example In Spanish: Nosotros preferimos la comida china.
Translation In English: We prefer Chinese food.
$$\mathbf{\color{red}{Vosotros:-}}$$preferís
Example In Spanish: Vosotros preferís el instructor anterior.
Translation In English: You prefer the previous instructor.
$$\mathbf{\color{red}{Ustedes/ellos/ellas:-}}$$prefieren
Example In Spanish: Ellos prefieren no salir de noche.
Translation In English: They prefer not to go out at night.
### Conjugation of Preferir In the Preterite Indicative
When it comes to Spanish, there are two forms of past tense: the preterite and the imperfect. A preterite is used to talk about things that have already taken place in the past. If you notice, the stem changes from e to i (not ie) in the third person conjugations (él/ella/usted, ellos/ellas/ustedes) of the preterite tense.
$$\mathbf{\color{red}{Yo:-}}$$ preferí
Example In Spanish: Yo preferí estudiar sola.
Translation In English: I preferred to study alone.
$$\mathbf{\color{red}{Tú:-}}$$preferiste
Example In Spanish: Tú preferiste el frío que el calor.
Translation In English: You preferred the cold over the heat.
$$\mathbf{\color{red}{Ella / Él / Usted:-}}$$prefirió
Example In Spanish: Ella prefirió viajar en bus.
Translation In English: She preferred to travel by bus.
$$\mathbf{\color{red}{Nosotros:-}}$$preferimos
Example In Spanish: Nosotros preferimos la comida china.
Translation In English: We preferred Chinese food.
$$\mathbf{\color{red}{Vosotros:-}}$$preferisteis
Example In Spanish: Vosotros preferisteis el instructor anterior.
Translation In English: You preferred the previous instructor.
$$\mathbf{\color{red}{Ustedes/ellos/ellas:-}}$$prefirieron
Example In Spanish: Ellos prefirieron no salir de noche.
Translation In English: They preferred not to go out at night.
### Conjugation of Preferir In the Imperfect Indicative
A second form of the past tense in Spanish is the imperfect tense, which is used to describe ongoing or repeated actions from the past. To prefer in the imperfect is usually translated to English as “used to prefer.” Also, keep in mind that there are no stem changes in the imperfect tense.
$$\mathbf{\color{red}{Yo:-}}$$ prefería
Example In Spanish: Yo prefería estudiar sola.
Translation In English: I used to prefer to study alone.
$$\mathbf{\color{red}{Tú:-}}$$preferías
Example In Spanish: Tú preferías el frío que el calor.
Translation In English: You used to prefer the cold over the heat.
$$\mathbf{\color{red}{Ella / Él / Usted:-}}$$prefería
Example In Spanish: Ella prefería viajar en bus.
Translation In English: She used to prefer to travel by bus
$$\mathbf{\color{red}{Nosotros:-}}$$preferíamos
Example In Spanish: Nosotros preferíamos la comida china.
Translation In English: We used to prefer Chinese food.
$$\mathbf{\color{red}{Vosotros:-}}$$preferíais
Example In Spanish: Vosotros preferíais el instructor anterior.
Translation In English: You used to prefer the previous instructor.
$$\mathbf{\color{red}{Ustedes/ellos/ellas:-}}$$preferían
Example In Spanish: Ellos preferían no salir de noche.
Translation In English: They used to prefer not to go out at night.
### Conjugation of Preferir In the Future Indicative
You can conjugate the future tense by starting with the infinitive form and adding the endings of the future tense.
$$\mathbf{\color{red}{Yo:-}}$$ preferiré
Example In Spanish: Yo preferiré estudiar sola.
Translation In English: I will prefer to study alone.
$$\mathbf{\color{red}{Tú:-}}$$preferirás
Example In Spanish: Tú preferirás el frío que el calor.
Translation In English: You will prefer the cold over the heat.
$$\mathbf{\color{red}{Ella / Él / Usted:-}}$$preferirá
Example In Spanish: Ella preferirá viajar en bus.
Translation In English: She will prefer to travel by bus.
$$\mathbf{\color{red}{Nosotros:-}}$$preferiremos
Example In Spanish: Nosotros preferiremos la comida china.
Translation In English: We will prefer Chinese food.
$$\mathbf{\color{red}{Vosotros:-}}$$preferiréis
Example In Spanish: Vosotros preferiréis el instructor anterior.
Translation In English: You will prefer the previous instructor.
$$\mathbf{\color{red}{Ustedes/ellos/ellas:-}}$$preferirán
Example In Spanish: Ellos preferirán no salir de noche.
Translation In English: They will prefer not to go out at night.
### Conjugation of Preferir In the Periphrastic Future Indicative
$$\mathbf{\color{red}{Yo:-}}$$ voy a preferir
Example In Spanish: Yo voy a preferir estudiar sola.
Translation In English: I am going to prefer to study alone.
$$\mathbf{\color{red}{Tú:-}}$$vas a preferir
Example In Spanish: Tú vas a preferir el frío que el calor.
Translation In English: You are going to prefer the cold over the heat.
$$\mathbf{\color{red}{Ella / Él / Usted:-}}$$va a preferir
Example In Spanish: Ella va a preferir viajar en bus.
Translation In English: She is going to prefer to travel by bus.
$$\mathbf{\color{red}{Nosotros:-}}$$vamos a preferir
Example In Spanish: Nosotros vamos a preferir la comida china.
Translation In English: We are going to prefer Chinese food.
$$\mathbf{\color{red}{Vosotros:-}}$$vais a preferir
Example In Spanish: Vosotros vais a preferir el instructor anterior.
Translation In English: You are going to prefer the previous instructor.
$$\mathbf{\color{red}{Ustedes/ellos/ellas:-}}$$van a preferir
Example In Spanish: Ellos van a preferir no salir de noche.
Translation In English: They are going to prefer not to go out at night.
Tener- “to be”: Spanish Verb Conjugation In Present, Past, Future Tenses
### Conjugation of Preferir In the Present Progressive/Gerund Form
The progressive tenses of the verb are usually formed with the verb estar followed by the present participle or gerund, prefiriendo. It is interesting to note that in the gerund, the stem changes from e to i (and not to ie). However, the verb preferir is not typically used in the present progressive form, since the verb normally does not refer to continuing action.
$$\mathbf{\color{red}{Present\ Progressive\ of\ Preferir:-}}$$está prefiriendo
Example In Spanish: Ella está prefiriendo viajar en bus.
Translation In English: She is preferring to travel by bus.
### Conjugation of Preferir In the Past Participle
For the conjugation of perfect tenses, for example, the present perfect, you need the auxiliary verb Haber and a past participle, preferido.
$$\mathbf{\color{red}{Present\ Perfect\ of\ Preferir:-}}$$ha preferido
Example In Spanish: Ella ha preferido viajar en bus.
Translation In English: She has preferred to travel by bus.
### Conjugation of Preferir In the Conditional Indicative
As with the future tense, the conditional tense is conjugated by starting with the infinitive form and adding conditional endings.
$$\mathbf{\color{red}{Yo:-}}$$ preferiría
Example In Spanish: Yo preferiría estudiar sola.
Translation In English: I would prefer to study alone.
$$\mathbf{\color{red}{Tú:-}}$$preferirías
Example In Spanish: Tú preferirías el frío que el calor.
Translation In English: You would prefer the cold over the heat.
$$\mathbf{\color{red}{Ella / Él / Usted:-}}$$preferiría
Example In Spanish: Ella preferiría viajar en bus.
Translation In English: She would prefer to travel by bus.
$$\mathbf{\color{red}{Nosotros:-}}$$preferiríamos
Example In Spanish: Nosotros preferiríamos la comida china.
Translation In English: We would prefer Chinese food.
$$\mathbf{\color{red}{Vosotros:-}}$$preferiríais
Example In Spanish: Vosotros preferiríais el instructor anterior.
Translation In English: You would prefer the previous instructor.
$$\mathbf{\color{red}{Ustedes/ellos/ellas:-}}$$preferirían
Example In Spanish: Ellos preferirían no salir de noche.
Translation In English: They would prefer not to go out at night.
### Conjugation of Preferir In the Present Subjunctive
Unlike the present indicative tense, the stem of the present subjunctive changes from e to ie in all conjugations except Nosotros and Vosotros, just like the stem of the present indicative. Nevertheless, Nosotros and Vosotros both have a stem change as well, but it is simply that the letter e is changed to the letter i.
$$\mathbf{\color{red}{Que\ Yo:-}}$$ prefiera
Example In Spanish: El profesor recomienda que yo prefiera estudiar sola.
Translation In English: The professor recommends that I prefer to study alone.
$$\mathbf{\color{red}{Que\ Tú:-}}$$prefieras
Example In Spanish: Tu padre espera que tú prefieras el frío que el calor.
Translation In English: Your father hopes that you prefer the cold over the heat.
$$\mathbf{\color{red}{Que\ Ella / Él / Usted:-}}$$prefiera
Example In Spanish: El conductor espera que ella prefiera viajar en bus.
Translation In English: The driver hopes that she prefers to travel by bus.
$$\mathbf{\color{red}{Que\ nosotros:-}}$$prefiramos
Example In Spanish: Nuestros abuelos esperan que nosotros prefiramos la comida china.
Translation In English: Our grandparents hope that we prefer Chinese food.
$$\mathbf{\color{red}{Que\ vosotros:-}}$$prefiráis
Example In Spanish: Perla espera que vosotros prefiráis el instructor anterior.
Translation In English: Perla hopes that you prefer the previous instructor.
$$\mathbf{\color{red}{Que\ Ustedes/ellos/ellas:-}}$$prefieran
Example In Spanish: Sus padres esperan que ustedes prefieran no salir de noche.
Translation In English: Your parents hope that you prefer not to go out at night.
How To Use Divertir And Divertirse Conjugation In Spanish?
### Conjugation of Preferir In the Imperfect Subjunctive
The imperfect subjunctive can be conjugated in two different ways. Both of these options have the stem changed from e to i.
Type: 1
$$\mathbf{\color{red}{Que\ Yo:-}}$$:-}}\) prefiriera
Example In Spanish: El profesor recomendaba que yo prefiriera estudiar sola.
Translation In English: The professor recommended that I prefer to study alone.
$$\mathbf{\color{red}{Que\ Tú:-}}$$prefirieras
Example In Spanish: Tu padre esperaba que tú prefirieras el frío que el calor.
Translation In English: Your father hoped that you prefer the cold over the heat.
$$\mathbf{\color{red}{Que\ Ella / Él / Usted:-}}$$prefiriera
Example In Spanish: El conductor esperaba que ella prefiriera viajar en bus.
Translation In English: The driver hoped that she prefers to travel by bus.
$$\mathbf{\color{red}{Que\ nosotros:-}}$$prefiriéramos
Example In Spanish: Nuestros abuelos esperaban que nosotros prefiriéramos la comida china.
Translation In English: Our grandparents hoped that we prefer Chinese food.
$$\mathbf{\color{red}{Que\ vosotros:-}}$$prefirierais
Example In Spanish: Perla esperaba que vosotros prefirierais el instructor anterior.
Translation In English: Perla hoped that you prefer the previous instructor.
$$\mathbf{\color{red}{Que\ Ustedes/ellos/ellas:-}}$$prefirieran
Example In Spanish: Sus padres esperaban que ustedes prefirieran no salir de noche.
Translation In English: Your parents hoped that you prefer not to go out at night.
Type: 2
$$\mathbf{\color{red}{Que\ Yo:-}}$$:-}}\) prefiriese
Example In Spanish: El profesor recomendaba que yo prefiriese estudiar sola.
Translation In English: The professor recommended that I prefer to study alone.
$$\mathbf{\color{red}{Que\ Tú:-}}$$prefirieses
Example In Spanish: Tu padre esperaba que tú prefirieses el frío que el calor.
Translation In English: Your father hoped that you prefer the cold over the heat.
$$\mathbf{\color{red}{Que\ Ella / Él / Usted:-}}$$prefiriese
Example In Spanish: El conductor esperaba que ella prefiriese viajar en bus.
Translation In English: The driver hoped that she prefers to travel by bus.
$$\mathbf{\color{red}{Que\ nosotros:-}}$$prefiriésemos
Example In Spanish: Nuestros abuelos esperaban que nosotros prefiriésemos la comida china.
Translation In English: Our grandparents hoped that we prefer Chinese food.
$$\mathbf{\color{red}{Que\ vosotros:-}}$$prefirieseis
Example In Spanish: Perla esperaba que vosotros prefirieseis el instructor anterior.
Translation In English: Perla hoped that you prefer the previous instructor.
$$\mathbf{\color{red}{Que\ Ustedes/ellos/ellas:-}}$$prefiriesen
Example In Spanish: Sus padres esperaban que ustedes prefiriesen no salir de noche.
Translation In English: Your parents hoped that you prefer not to go out at night.
A Brief Details about Oral English Course and Why to Pursue It?
### Conjugation of Preferir In the Imperative
In order to give orders or commands, you need to be in the imperative mood. For the verb preferir, however, the commands might sound somewhat awkward, since it is not common for a person to be commanded to prefer something. It is also worth noting that all of the commands have the stem changed from e to either ie or i.
#### Positive Commands
$$\mathbf{\color{red}{Tú:-}}$$prefiere
Example In Spanish: ¡Prefiere el frío que el calor!
Translation In English: Prefer the cold over the heat!
$$\mathbf{\color{red}{Usted:-}}$$prefiera
Example In Spanish: ¡Prefiera viajar en bus!
Translation In English: Prefer to travel by bus!
$$\mathbf{\color{red}{Nosotros:-}}$$prefiramos
Example In Spanish: ¡Prefiramos la comida china!
Translation In English: Let’s prefer Chinese food!
$$\mathbf{\color{red}{Vosotros:-}}$$preferid
Example In Spanish: ¡Preferid al instructor anterior!
Translation In English: Prefer the previous instructor!
$$\mathbf{\color{red}{Ustedes:-}}$$prefieran
Example In Spanish: ¡Prefieran no salir de noche!
Translation In English: Prefer not to go out a night!
#### Negative Commands
$$\mathbf{\color{red}{Tú:-}}$$no prefieras
Example In Spanish: ¡No prefieras el frío que el calor!
Translation In English: Don’t prefer the cold over the heat!
$$\mathbf{\color{red}{Usted:-}}$$no prefiera
Example In Spanish: ¡No prefiera viajar en bus!
Translation In English: Don’t prefer to travel by bus!
$$\mathbf{\color{red}{Nosotros:-}}$$no prefiramos
Example In Spanish: ¡No prefiramos la comida china!
Translation In English: Let’s not prefer Chinese food!
$$\mathbf{\color{red}{Vosotros:-}}$$no prefiráis
Example In Spanish: ¡No prefiráis al instructor anterior!
Translation In English: Don’t prefer the previous instructor!
$$\mathbf{\color{red}{Ustedes:-}}$$no prefieran
Example In Spanish: ¡No prefieran no salir de noche!
Translation In English: Don’t prefer not to go out a night!
## FAQs
What is the El form of Preferir?
Usted/él/ella
Is Preferir irregular in preterite?
Preferir is an irregular verb, so pay attention to the spelling. We need to use the stem prefir- in the third-person singular and plural. For the rest of the forms, we use the regular stem (prefer-).
Is Preferir an e to stem-changing ie verb?
The Spanish verb preferir (to prefer) is semi-regular since the stem changes in some forms of El Presente, but it has regular ‑ir group endings.
How do you use Preferir in a sentence?
Example In Spanish: Yo prefiero las camisas negras.
Translation In English: I prefer black shirts.
Example In Spanish: Ella prefiere venir los lunes.
Translation In English: She prefers coming on Mondays.
Example In Spanish: Yo preferiría no salir hoy.
Translation In English: I’d rather not go out today.
Example In Spanish: Yo preferiría quedarme en hotel que alquilar un apartamento.
Translation In English: I’d rather stay in a hotel than rent an apartment.
Canada Provinces: Demographics, Education, Attractions & Economy
|
2022-07-07 13:22:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44080835580825806, "perplexity": 13145.834624109824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00377.warc.gz"}
|
https://cs.stackexchange.com/questions/108615/what-does-tildeo-pn-alpha-mean
|
# What does $\tilde{O}_P(N^\alpha)$ mean?
What does $$\tilde{O}_P(N^\alpha)$$ mean? It appears in an estimation error mention in this paper, in the second paragraph on page 3.
What does big O subscript P mean in a probability context?
Pulled from: http://www2.math.uu.se/~svante/papers/sjN6.pdf (definition D5).
A probabilistic version of $$\mathcal{O}$$ that is frequently used is the following:
$$X_n = O_\text{P}(\alpha_n)$$ if for every $$\epsilon > 0$$ there exists constants $$C_{\epsilon}$$ and $$n_{\epsilon}$$ such that $$\mathbb{P}(|X_n| \leq C_{\epsilon}\alpha_n) > 1 − \epsilon$$ for every $$n \geq n_{\epsilon}$$. In other words, $$X_n/\alpha_n$$ is bounded, up to an exceptional event of arbitrarily small (but fixed) positive probability. This is also known as $$X_n/\alpha_n$$ being bounded in probability.
So combining this and D.W.'s answer we get:
$$X_n \in \tilde{O}_{\text{P}}(\alpha_n)$$ means for every $$\epsilon > 0$$ there exists constants $$C_\epsilon$$ and $$n_{\epsilon}$$ such that $$\mathbb{P}(|X_n| \leq C_{\epsilon}\alpha_n n^{\gamma}) > 1 - \epsilon$$ for every $$n \geq n_{\epsilon}$$ and for all $$\gamma > 0$$.
An looser terms $$X_n \in \tilde{O}_{\text{P}}(\alpha_n)$$ means $$X_n$$ is bounded by $$\alpha_n$$ within a polynomial factor with high probability.
• I think this is a clear and correct answer, thanks! – luw Apr 27 '19 at 22:29
• @luw cool, I also combined this info with $\tilde{O}$ from D.W.'s answer to make it more complete. – ryan Apr 27 '19 at 22:34
It depends on context; often, saying $$f(n) \in \tilde{O}(g(n))$$ means $$f(n) \in O(g(n) n^{\epsilon})$$ for all $$\epsilon>0$$. For example, $$n^2 \log n \in \tilde{O}(n^2)$$.
• Any idea what the subscript $P$ could mean? – Discrete lizard Apr 27 '19 at 17:50
• @Discretelizard, no clue. – D.W. Apr 27 '19 at 18:13
• @Discretelizard could it possibly mean Polynomial? Then you have $f(n) \in \tilde{O}_P(g(n))$ means $f(n) \in O(g(n) \cdot \text{poly}(n))$ ? – ryan Apr 27 '19 at 19:30
• @ryan I doubt it, just the tilde is already widely used to ignore polynomial factors. Judging from the context in the paper, I suspect it has something to do with probability. – Discrete lizard Apr 27 '19 at 19:34
|
2021-01-26 02:58:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8421213030815125, "perplexity": 353.388642531659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704795033.65/warc/CC-MAIN-20210126011645-20210126041645-00069.warc.gz"}
|
https://math.stackexchange.com/questions/1000586/r-commutative-ring-having-a-non-zero-nilpotent-then-r-times-subsetneq-r
|
# $R$ commutative ring having a non-zero nilpotent, then $R^{\times}\subsetneq (R[X])^{\times}$
Let $R$ be a commutative ring. Also there is $a\in R$, $a\ne 0$, such that $a^n=0$. Then $R^{\times}\subsetneq(R[X])^{\times}$, so there is an element in $(R[X])^{\times}$ which is not contained in $R^{\times}$
I am struggling in finding such an element in $(R[X])^{\times}$ which is not contained in $R^{\times}$.
Any tips? Thanks :)
Consider $1+ax$. More generally see Chapter 1 Exercises 1-2 of Introduction to Commutative Algebra by Atiyah and Macdonald .
• It is a little bit easier to consider $1-ax$. – Martin Brandenburg Nov 1 '14 at 10:46
• When I first did this problem, I used the difference of squares rule instead of the identity in the other answer. You can also do it by inducting on the degree of nilpotence as $(1+ax)(1-ax)=1-a^2x$ and $a^2$ is strictly less nilpotent then $a$ as long as $a\ne 0$. – PVAL-inactive Nov 1 '14 at 18:34
EDIT: If you just want a hint only read the next two lines.
Consider the general identity
$$A^n - B^n = (A-B)(A^{n-1}+ A ^{n-2}B + ...+AB^{n-2}+B^{n-1})$$
which holds in any commutative algebra. Now let $A=1$ and $B=aX$, where $a^n = 0$ in $R$. Since $(aX)^n = a^n X^n = 0$, we have
$$1 = 1-0 = 1^n - (aX)^n = (1-aX)(1+aX+a^2X^2+...+a^{n-1}X^{n-1}).$$
This shows $1-aX \in R[X]^\times$, so $R[X]^\times$ has elements that are not in $R^\times$.
|
2021-07-30 02:28:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391719102859497, "perplexity": 82.7837652625121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153899.14/warc/CC-MAIN-20210729234313-20210730024313-00509.warc.gz"}
|
https://love2d.org/forums/viewtopic.php?f=3&t=82019&start=10
|
## Using coroutines to create an RPG dialogue system
General discussion about LÖVE, Lua, game development, puns, and unicorns.
zorg
Party member
Posts: 2483
Joined: Thu Dec 13, 2012 2:55 pm
Location: Absurdistan, Hungary
Contact:
### Re: Using coroutines to create an RPG dialogue system
airstruck wrote:They are the same as what other languages (ES6, Python, PHP, etc.) usually call generators.
Technically, generators like in python are a tad more limited than lua's coroutines. Also, luaJIT alleviates the "call across C boundary" issue a bit, but not fully.
That said, it's true that it's a hassle, so if one doesn't want to, they can make do without them.
Me and my stuff True Neutral Aspirant. Why, yes, i do indeed enjoy sarcastically correcting others when they make the most blatant of spelling mistakes. No bullying or trolling the innocent tho.
pgimeno
Party member
Posts: 1441
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Using coroutines to create an RPG dialogue system
airstruck wrote:It's strange seeing that error message in this situation, I'd expect "attempt to yield from outside a coroutine" instead. That's what you'd normally get from executing coroutine.yield outside of a coroutine (try it in the REPL). Does Love wrap everything in a coroutine for some reason?
Well, this file causes the "yield across C-call boundary" both with LÖVE and luajit:
Code: Select all
coroutine.yield()
Thrust II Reloaded - GifLoad for Löve - GSpöt GUI - My NotABug.org repositories - portland (mobile orientation)
The MS-Github repositories I had have been closed after the acquisition announcement and will be removed in the near future.
airstruck
Party member
Posts: 650
Joined: Thu Jun 04, 2015 7:11 pm
Location: Not being time thief.
### Re: Using coroutines to create an RPG dialogue system
Ah, well that explains it. It throws "attempt to yield from outside a coroutine" in PUC Lua, I assumed it would do the same in LuaJIT but should have checked. I just figured it was something screwy with Love, naturally
Inny
Party member
Posts: 652
Joined: Fri Jan 30, 2009 3:41 am
Location: New York
### Re: Using coroutines to create an RPG dialogue system
If people are having trouble with coroutines (and believe me, I love coroutines), then there's a pretty normal way to run a menu system with just closures/continuations. I'll try to demonstrate with code since that always speaks louder.
Code: Select all
-- assuming our gamestate's basic functions, init, update, draw, etc., I'll show minimal stubs
function MyGameState:update()
self.runner = self.runner() -- Note here, always continuing with the last returned function
end
function MyGameState:init()
self.runner = self:runMenu() -- where it first comes from
end
function MyGameState:runMenu()
local menu = MyMethodForBuildingMenus()
:withChaining()
:andOtherFeatures()
:finalizer()
-- add to your entities here possibly
local function loop()
if love.keyboard:isDown("space") then
local command = whateverEntitySystem:getOption(menu)
if command == "submenu" then
-- remove menu from entities
return self:runSubMenu()
end
else
-- call your entity system for updating generic menus
end
return loop
end
return loop
end
function MyGameState:runSubMenu()
-- follows same structure as runMenu
end
In this example, the runMenu method uses a closure to capture all of the local variables it needs, manage the entities list, do the menu specific code, etc. What's really happening is the nested loop function is behaving like a while loop would in a coroutine, passing itself up to the update function as the next place to resume.
But again, I love me some coroutines, but clearly the C yield-boundary is a limitation and it's always important as a programmer to know the limitations of your favorite features so you don't try to use them where they don't belong.
airstruck
Party member
Posts: 650
Joined: Thu Jun 04, 2015 7:11 pm
Location: Not being time thief.
### Re: Using coroutines to create an RPG dialogue system
Inny wrote:there's a pretty normal way to run a menu system with just closures/continuations
Interesting approach. I'd like to see an example of that being used in a in a game, do you know of one? It's a little hard for me to tell from your example which parts are meant to be repeated for different menus and which parts are meant to be reusable. I suspect it could be simplified a lot by leveraging some kind of promise-like pattern, but maybe I'm misreading it.
Inny
Party member
Posts: 652
Joined: Fri Jan 30, 2009 3:41 am
Location: New York
### Re: Using coroutines to create an RPG dialogue system
airstruck wrote:Interesting approach. I'd like to see an example of that being used in a in a game, do you know of one? It's a little hard for me to tell from your example which parts are meant to be repeated for different menus and which parts are meant to be reusable. I suspect it could be simplified a lot by leveraging some kind of promise-like pattern, but maybe I'm misreading it.
To expand on it a bit, here's some incomplete snippets from my personal one-hour-a-week-for-fun project
Code: Select all
function states.action_menu:run_action_menu()
local W, H = 18, 6
local sw, sh = graphics.max_window_size()
local main_menu = systems.menubox.assembly.builder.new()
:at(math.floor((sw-W)/2), sh-H+1):size(W, H)
:option("Talk", "talk")
:option("Item", "use")
:option("Equip", "equip")
:option("Search", "search")
:option("Attack", "attack", math.floor(W/2)+1, 1)
:option("Magic", "mag")
:option("Status", "stats")
:option("System", "system")
:build()
local menu_close_then = menu_open(self.entities, main_menu)
local function loop()
if input.tap.cancel then
menu_close_then(gamestate.pop)
elseif input.tap.action then
local command = main_menu.menu_config[main_menu.menu_selection].id
if command == "talk" then
if self.callback then self.callback() end
elseif command == "stats" then
return menu_close_then(self.run_stats_window, self)
end
else
local cmd = systems.menubox.update(self.entities)
if cmd == 'advance' then
graphics.set_dirty()
end
end
return loop
end
return loop
end
function states.action_menu:run_stats_window()
local sw, sh = graphics.max_window_size()
local W, H = 20, 12
local stats_window = systems.drawboxes.assembly.new_window_builder()
:at(math.floor((sw-W)/2), math.floor((sh-H)/2)):size(W, H)
:text(("Level %7i"):format(99), 1, 1)
:text(("Experience %7i"):format(99999), 1, 2)
:text(("Next Level %7i"):format(9999), 1, 3)
:text("-", 1, 4)
:text(("Hit Points %3i/%3i"):format(999, 999), 1, 5)
:text("", 1, 6)
:text("-", 1, 7)
:text("-", 1, 8)
:text("-", 1, 9)
:text(("Carry Weight %2i/%2i"):format(99, 99), 1, 10)
:build()
local menu_close_then = menu_open(self.entities, stats_window)
local function loop()
if input.tap.cancel then
return menu_close_then(self.run_action_menu, self)
end
return loop
end
return loop
end
I haven't had the chance to pare this down to its bare minimums yet, but it works for me.
airstruck
Party member
Posts: 650
Joined: Thu Jun 04, 2015 7:11 pm
Location: Not being time thief.
### Re: Using coroutines to create an RPG dialogue system
Inny wrote:here's some incomplete snippets
Thanks for sharing, there's just too many unknowns in there to really comment on it though (for example where does input.tap.action come from, what does menu_open do, what happens in init and update, etc.). I see what you're getting at, more or less, but I think there are cleaner ways to handle it. You mentioned that this is a "pretty normal way to run a menu system," I took that to mean that you'd seen this done someplace else, more than once. Is there an example of this being used in a full game that we could download and look at?
I'm not trying to knock this approach, by the way, just trying to figure out what its merits are and how it compares to other solutions.
Inny
Party member
Posts: 652
Joined: Fri Jan 30, 2009 3:41 am
Location: New York
### Re: Using coroutines to create an RPG dialogue system
Oh, sorry, I used "normal" to mean "made of normal parts", not like "everyone in the love2d world uses it." Actually this technique is probably more used in the javascript world and not so much within love/lua, e.g. you would build out some DOM and bind a bunch of callbacks to their event handlers.
airstruck
Party member
Posts: 650
Joined: Thu Jun 04, 2015 7:11 pm
Location: Not being time thief.
### Re: Using coroutines to create an RPG dialogue system
Inny wrote:Actually this technique is probably more used in the javascript world and not so much within love/lua
Fair enough, I've seen it there too. I think part of the reason promises and other generalized async patterns exist is to avoid that pattern, honestly (especially in JS). It can be hard to read and maintain in my experience. It's just a matter of preference, I guess, but I'd probably do something like this (assuming Chain function mentioned earlier exists, or you could do something similar using a promises implementation):
Code: Select all
function MyGameState:MenuChain (menu)
return Chain(
-- Push menu onto stack and slide it onto screen.
function (go)
self.menuStack[#self.menuStack + 1] = menu
-- A method that takes a callback, and starts a tween.
-- The callback will run when the tween is finished.
menu:slideIn(go)
end,
-- Await user input.
function (go)
-- A method that takes a callback and awaits menu input.
-- The callback will run when input is received.
-- A command (user action) object is passed to the callback.
menu:awaitInput(go)
end,
-- Handle user input.
function (go, command)
-- If user is closing this menu, go to next link in chain now.
if command.id == 'close' then
go()
return
end
-- If it's a submenu, return a new menu chain for it.
if command.id == 'submenu' then
return self:MenuChain(command.submenu)
end
-- Command's ID not recognized; it's not a menu-related command. Let it do its own thing.
return self:MenuCommandChain(command)
end,
-- Slide menu off screen.
function (go)
menu:slideOut(go)
end,
-- Pop menu off the stack.
function (go)
assert(self.menuStack[#self.menuStack] == menu)
self.menuStack[#self.menuStack] = nil
end
)
end
function MyGameState:MenuCommandChain (command)
return Chain(
function (go)
if not command.execute then error 'bad menu command' end
-- A method that takes a callback and starts executing a command.
-- The callback will run when the command is finished.
command:execute(go)
end
)
end
function MyGameState:update (dt)
local topMenu = self.menuStack[#self.menuStack]
if topMenu then
topMenu:update(dt)
end
end
function MyGameState:draw ()
for _, menu in ipairs(self.menuStack) do
menu:draw()
end
end
function MyGameState:init()
self.menuStack = {}
local menu = Menu(self)
:addItem { id = 'close', title = 'Close' }
:addItem { id = 'submenu', title = 'Crafting',
submenu = self:CraftingMenu() }
:addItem { id = 'submenu', title = 'Inventory',
submenu = self:InventoryMenu() }
:addItem {
title = 'Turn orange',
execute = function (cb)
orangeTween:start(cb)
end
}
:addItem {
title = 'Quit',
execute = function ()
os.exit()
end
}
self:MenuChain(menu)()
end
This example is also incomplete, of course. I'll probably be pushing some code to github soon that does something similar, will try to remember to post a link here.
### Who is online
Users browsing this forum: No registered users and 2 guests
|
2019-01-19 04:04:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19268791377544403, "perplexity": 9412.527016999542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662124.0/warc/CC-MAIN-20190119034320-20190119060320-00408.warc.gz"}
|
https://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=2169
|
Time Limit : sec, Memory Limit : KB
English
Problem E: Colored Octahedra
A young boy John is playing with eight triangular panels. These panels are all regular triangles of the same size, each painted in a single color; John is forming various octahedra with them.
While he enjoys his playing, his father is wondering how many octahedra can be made of these panels since he is a pseudo-mathematician. Your task is to help his father: write a program that reports the number of possible octahedra for given panels. Here, a pair of octahedra should be considered identical when they have the same combination of the colors allowing rotation.
Input
The input consists of multiple datasets. Each dataset has the following format:
Color1 Color2 ... Color8
Each Colori (1 ≤ i ≤ 8) is a string of up to 20 lowercase alphabets and represents the color of the i-th triangular panel.
The input ends with EOF.
Output
For each dataset, output the number of different octahedra that can be made of given panels.
Sample Input
blue blue blue blue blue blue blue blue
red blue blue blue blue blue blue blue
red red blue blue blue blue blue blue
Output for the Sample Input
1
1
3
|
2021-06-19 22:48:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5440213680267334, "perplexity": 2827.0208273985327}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00102.warc.gz"}
|
https://imathworks.com/tex/tex-latex-line-spacing-in-a-cell-of-table/
|
# [Tex/LaTex] Line-spacing in a cell of table
line-spacingtables
I'd like to change the line space of sentence in a cell of table.
It is not line-breaked, but over-flowed since the sentence is quite long in the cell.
My code example is below.
The line space is quite larger than cell space, which looks so bad.
Is there any way to make it narrow, only affecting in the cell?
ps. Since I use \arraystretch which make cell-line small, line-space in the cell looks more bigger than cell-space.
\documentclass[10pt]{article}
\usepackage{booktabs,ctable,threeparttable}
\begin{document}
\renewcommand{\arraystretch}{0.75}
\begin{table}[t]
\caption{Test}
\begin{threeparttable}
\begin{tabularx}{\textwidth}{c l X}
\FL
No. & Code & Name \ML[0.08em]
01 & 01XXX & Agriculture, forestry and fishing \NN
02 & 05XXX & Manufacture of Electronic Components, Computer,
Radio, Television and Communication Equipment and Apparatuses
\NN
03 & 06XXX & Nursing \LL
\end{tabularx}
\label{tab:test}
\end{threeparttable}
\end{table}
\end{document}
This is how my table look.
\arraystretch only affects line spacing between table rows. In an X column, you can change the spacing between lines inside a cell by adjusting \baselineskip. Below is the same example coded in two slightly different ways, the first using \ctable, the other using your tabularx set-up. Notice the \ctable gives better spacing for the caption relative to the table.
\documentclass[10pt]{article}
\usepackage{ctable}
\begin{document}
{\renewcommand{\arraystretch}{1.5}
\ctable[caption = Test,pos=htp,width=\textwidth]
{c l >{\setlength{\baselineskip}{1.5\baselineskip}}X}{}{
\FL
No. & Code & Name \ML[0.08em]
01 & 01XXX & Agriculture, forestry and fishing \NN
02 & 05XXX & Manufacture of Electronic Components, Computer,
Radio, Television and Communication Equipment and Apparatuses
\NN
03 & 06XXX & Nursing \LL
}}
\begin{table}[htp]
\renewcommand{\arraystretch}{1.5}
\caption{Test}
\begin{tabularx}{\textwidth}{c l >{\setlength{\baselineskip}{1.5\baselineskip}}X}
\FL
No. & Code & Name \ML[0.08em]
01 & 01XXX & Agriculture, forestry and fishing \NN
02 & 05XXX & Manufacture of Electronic Components, Computer,
Radio, Television and Communication Equipment and Apparatuses
\NN
03 & 06XXX & Nursing \LL
\end{tabularx}
\label{tab:test}
\end{table}
\end{document}
I demonstrated this with increasing the spacing, as this is more realistic. Squeezing will work too, just replace both 1.5 by 0.75 in the \arraystretch and the >{...}X, but it does not look too good. As Mico suggests, in such situations reducing font sizes is better. In both cases, note that the change to \arraystretch has been inside a group/environment, to prevent this affecting other places later in the document.
|
2023-03-29 09:06:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6917312145233154, "perplexity": 6540.411402317029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00129.warc.gz"}
|
https://www.ias.ac.in/listing/bibliography/pmsc/Vugar_E._Ismailov
|
• Vugar E Ismailov
Articles written in Proceedings – Mathematical Sciences
• On the Theorem of 𝑀 Golomb
Let $X_1,\ldots,X_n$ be compact spaces and $X=X_1\times\cdots\times X_n$. Consider the approximation of a function $f\in C(X)$ by sums $g_1(x_1)+\cdots+g_n(x_n)$, where $g_i\in C(X_i),i=1,\ldots,n$. In [8], Golomb obtained a formula for the error of this approximation in terms of measures constructed on special points of 𝑋, called `projection cycles’. However, his proof had a gap, which was pointed out by Marshall and O’Farrell [15]. But the question if the formula was correct, remained open. The purpose of the paper is to prove that Golomb’s formula holds in a stronger form.
• # Proceedings – Mathematical Sciences
Volume 132, 2022
All articles
Continuous Article Publishing mode
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019
|
2022-09-29 15:02:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9202595949172974, "perplexity": 1594.3445712625353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00128.warc.gz"}
|
https://www.physicsforums.com/threads/show-that-the-sequence-converges.970445/
|
# Show that the sequence converges
Homework Statement:
Use the Monotone Convergence Theorem to show that the sequence converges.
Relevant Equations:
$$\left\{\frac{n!}{1\cdot3\cdot5\cdot\dots\cdot(2n+1)}\right\}^{+\infty}_{n=0}$$
So what I know about the Monotone Convergence Theorem is that it states that: if a sequence is bounded and monotone, then it is convergent. So all I have to show is that the sequence is bounded and monotone.
My attempt at showing that it is bounded:
The sequence can be expanded as:
$$={\frac 1 1}\cdot{\frac 2 3}\cdot{\frac 3 5}\cdot\dots\cdot{\frac {(n-1)} {(2n-1)}}\cdot{\frac n {(2n+1)}}$$
I'm not really sure how to show this mathematically, but I saw that on all the factors, the highest value it has obtained is ##1##, and that is from the first factor, ##\frac 1 1##. Every next factor has been ##<1##. So I think its upper bound is 1:
$${\frac 1 1}\cdot{\frac 2 3}\cdot{\frac 3 5}\cdot\dots\cdot{\frac {(n-1)} {(2n-1)}}\cdot{\frac n {(2n+1)}}\leq1$$
Next, we saw that ##n## starts at ##0##. And having ##a_0=1##. Since ##n\rightarrow+\infty## then the value of each factor can't become negative. So I think it is always postive e.g. ##>0##:
$$0\lt{\frac 1 1}\cdot{\frac 2 3}\cdot{\frac 3 5}\cdot\dots\cdot{\frac {(n-1)} {(2n-1)}}\cdot{\frac n {(2n+1)}}\leq1$$
So I guess this shows that it is bounded...?
Now I have to show that it is monotone:
A monotone sequence to my understanding is a sequence that is either increasing or decreasing. I think I can show this by equating:
$$\frac{a_{n+1}}{a_n}$$
to find out if the sequence is increasing or decreasing. So I do:
\begin{align} a_n&=\frac{n!}{1\cdot3\cdot5\cdot\dots\cdot(2n+1)} \\ a_{n+1}&=\frac{(n+1)(n!)}{1\cdot3\cdot5\cdot\dots\cdot(2n+1)\cdot(2n+3)} \\ \frac{a_{n+1}}{a_n}&=\frac {n+1}{2n+3} \end{align}
We can see here that for all ##n\geq0, \frac {n+1}{2n+3}\lt1##, so I think that proves that the sequence is decreasing.
Since it seems to satisfy the two conditions of the Monotone Convergence Theorem, I guess it proves that it is convergent?
I think I managed to solve it, but I wanted to post it here because I am really unsure of how I proved it, and if there's a better or simpler way of solving this.
member 587159
The main idea certainly is correct. I didn't check for details though. Note that once you have shown that the sequence is decreasing, it is automatically bounded above (by the first sequence term), so you could make your answer shorter.
As an interesting side question: do you have any idea what the limit could be?
Iyan Makusa
Note that once you have shown that the sequence is decreasing, it is automatically bounded above (by the first sequence term), so you could make your answer shorter.
Oh well that's definitely shorter! I'll keep that in mind.
As an interesting side question: do you have any idea what the limit could be?
Hmm, this is an interesting side question. I'll give it a try:
So I think I can isolate the entire sequence to just the last factor ##\frac{n}{2n+1}## and evaluate its limit:
\begin{align} \lim_{n\rightarrow\infty}\frac{n}{2n+1} \end{align}
Applying L'Hopital's rule because both the numerator and denominator evaluate to infinity:
\begin{align} \lim_{n\rightarrow\infty}\frac{n}{2n+1} = \lim_{n\rightarrow\infty}\frac1 2 = \frac 1 2 \end{align}
Did I do this correctly?
pasmith
Homework Helper
Oh well that's definitely shorter! I'll keep that in mind.
Hmm, this is an interesting side question. I'll give it a try:
So I think I can isolate the entire sequence to just the last factor ##\frac{n}{2n+1}## and evaluate its limit:
\begin{align} \lim_{n\rightarrow\infty}\frac{n}{2n+1} \end{align}
Applying L'Hopital's rule because both the numerator and denominator evaluate to infinity:
\begin{align} \lim_{n\rightarrow\infty}\frac{n}{2n+1} = \lim_{n\rightarrow\infty}\frac1 2 = \frac 1 2 \end{align}
Did I do this correctly?
You are ignoring all of the other factors.
You found that to obtain $a_{n+1}$ from $a_n$, you multiply by a factor $(n+1)/(2n+3)$ which is always strictly less than 1, and the sequence was therefore decreasing. But in fact $(n+1)/(2n+3) < \frac12$, so the value of each term in the sequence is slightly less than half the value of the previous term. What does that tell you about the limit?
member 587159
You are ignoring all of the other factors.
You found that to obtain $a_{n+1}$ from $a_n$, you multiply by a factor $(n+1)/(2n+3)$ which is always strictly less than 1, and the sequence was therefore decreasing. But in fact $(n+1)/(2n+3) < \frac12$, so the value of each term in the sequence is slightly less than half the value of the previous term. What does that tell you about the limit?
My bad, I was under the impression that I can ignore the other factors because only the "last" factor matters because...well it's the last, so that's gotta be where the limit will be. Guess that was wrong!
Since the next factors are always ##<\frac 1 2## of the previous term, then the values are decreasing, to I assume is 0?
member 587159
My bad, I was under the impression that I can ignore the other factors because only the "last" factor matters because...well it's the last, so that's gotta be where the limit will be. Guess that was wrong!
Since the next factors are always ##<\frac 1 2## of the previous term, then the values are decreasing, to I assume is 0?
Good intuition! Now, make it formal!
Hint:
##a_1 < a_0/2, a_2 < a_1/2 < a_0/2^2, \dots##
Continue this process. What estimate can you make for ##a_n##?
SammyS
Ray Vickson
Homework Helper
Dearly Missed
Oh well that's definitely shorter! I'll keep that in mind.
Hmm, this is an interesting side question. I'll give it a try:
So I think I can isolate the entire sequence to just the last factor ##\frac{n}{2n+1}## and evaluate its limit:
\begin{align} \lim_{n\rightarrow\infty}\frac{n}{2n+1} \end{align}
Applying L'Hopital's rule because both the numerator and denominator evaluate to infinity:
\begin{align} \lim_{n\rightarrow\infty}\frac{n}{2n+1} = \lim_{n\rightarrow\infty}\frac1 2 = \frac 1 2 \end{align}
Did I do this correctly?
I get something different. First, write
$$1 \cdot 3 \cdots (2n+1) =\frac{ 1 \cdot 2 \cdot 3 \cdot 4 \cdots (2n) \cdot (2n+1)}{2 \cdot 4 \cdots (2n)}\\ \hspace{3em} = \frac{(2n+1)!}{2^n n!}$$ so your ratio is
$$\text{ratio} = \frac{2^n (n!)^2}{(2n+1)!}$$ Now one can apply Stirling's formula ##k! \sim \sqrt{2 \pi k}\, k^k e^{-k}## for large ##k##.
LCKurtz
Homework Helper
Gold Member
Another approach (assuming you have studied infinite series, which maybe you haven't). You have shown$$\frac{a_{n+1}}{a_n}=\frac {n+1}{2n+3} \rightarrow \frac 1 2$$which shows the series ##\sum a_n## converges by the ratio test, which implies ##a_n \to 0##.
member 587159
StoneTemplePython
Gold Member
A slightly different finish (that is close to MathQEDs) is to note that we have a positive sequence that is monotone decreasing and bounded below by zero -- using this proves the limit exists. i.e. we know ##a_n \to c ## for some ##c \in [0,1]## but this also implies ##a_{n+1} \to c##
so the sequence is
##0 \leq a_{n+1} \leq \frac{1}{2}a_n ##
which implies
##0 \leq c\leq \frac{1}{2}c ##
which simplifies to get the limitting value.
member 587159
pasmith
Homework Helper
Alternatively, you can write $$0 < \frac{n!}{1 \cdot 3 \cdot \, \dots \, \cdot (2n+1)} = \prod_{k=1}^n \frac{k}{2k+1} < \frac1{2^n}.$$
|
2021-09-22 23:49:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9906129837036133, "perplexity": 611.4827776289101}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057403.84/warc/CC-MAIN-20210922223752-20210923013752-00664.warc.gz"}
|
https://physics.stackexchange.com/questions/428095/why-is-conservation-of-momentum-not-valid-here
|
# Why is conservation of momentum not valid here?
To explain my confusion, I would provide the following system:
The two masses $m$ and $M$, with $M\gg m$, are moving towards each other(as directed by the arrows) with a common constant speed $V_0$. There is no friction between any two surfaces and all collisions are perfectly elastic.
I take all velocities +ve towards right and I call the velocity of $m$ after collision $V$. As $M\gg m$, there will be negligible change in the velocity of $M$ after collision. Also, as the collision is elastic, the velocity with which the two masses approach each other must be equal to the velocity with which they get separated.
Therefore, \begin{align} -V_0-(V_0)&=-V_0-(V) \\V&=-3V_0 \end{align}
Now, this seems quite true to me.
But, when we apply conservation of momentum, \begin{align} mV_0-MV_0&=mV-MV_0 \\V&=V_0 \end{align}
So why is conservation of momentum not valid here?
• Possible duplicate of Conservation of momentum in an elastic collision – sammy gerbil Sep 11 '18 at 16:37
• @sammygerbil Sir, I do not think this is a duplicate of that question. My question asks why and how my working is not correct. The question you provided simply wanted an explanation of why a ball after hitting a rigid stationary wall rebounds with same speed. My question does not ask that. Though I know that this problem can be reduced to that problem if we look at things from the frame of reference of the big mass, but it can only be reduced to and the problems are different. – SeaDog Sep 11 '18 at 16:50
• @SeaDog Both questions ask why momentum is not conserved. Both questions make the assumption that because the mass of the larger object is so very much bigger than that of the smaller object then the change in velocity of the larger object (and therefore also its change of momentum) can be neglected. – sammy gerbil Sep 11 '18 at 16:57
• @sammygerbil There the wall was rigid and so, without any math and only clear conception, one can easily say that the change of momentum of the wall is zero – SeaDog Sep 11 '18 at 17:01
• @SeaDog That is exactly the same mistake which you have made. – sammy gerbil Sep 11 '18 at 17:03
As $M\gg m$, there will be negligible change in the velocity of $M$ after collision.
Yes, the change in the velocity of $M$ will be negligible, but what is conserved is not the velocity but the momentum and, since $M\gg m$, even a small change in the velocity of $M$ will translate in a relatively big change in its momentum.
• Coefficient of restitution $e=1$ for elastic collisions, whether or not masses are equal. So relative velocity of separation equals relative velocity of approach. This is a consequence of the conservation of kinetic energy and momentum. – sammy gerbil Sep 11 '18 at 16:31
|
2019-10-20 03:03:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7239819765090942, "perplexity": 209.63702823815922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986702077.71/warc/CC-MAIN-20191020024805-20191020052305-00495.warc.gz"}
|
http://gmatclub.com/forum/is-x-y-x-y-123108-20.html?kudos=1
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 07 Oct 2015, 08:32
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Is |x-y|>|x|-|y|
Author Message
TAGS:
Intern
Joined: 22 Jan 2012
Posts: 3
Followers: 0
Kudos [?]: 0 [0], given: 3
Is |x-y|>|x|-|y| ? [#permalink] 07 May 2012, 08:08
OA is B. I faced this problem in the GMATPrep.
(1) y < x
if x=3 and y=2, left hand abs(x-y) = 1, and right hand abs(x) - abs(y) = 1...No.
But if x=3 and y=-2, left hand is 5 and right is 1...Yes.
INSUFFICIENT.
(2) xy < 0
Let's think the following two cases.
(a) x>0 and y<0
In this case abs(x-y) > abs(x), as in the second plug-in in the discussion of (1) above.
So abs(x-y) naturally is greater than abs(x) - abs(y) because abs(x) > abs(x)-abs(y)...Yes.
(b) x<0 and y>0
In this case abs(x-y) = abs(x)+abs(y). Since abs(x) + abs(y) > abs (x) - abs(y),
abs(x-y) > abs(x)-abs(y)...Yes.
SUFFICIENT.
Kaplan GMAT Prep Discount Codes Knewton GMAT Discount Codes Veritas Prep GMAT Discount Codes
Intern
Joined: 27 Feb 2013
Posts: 22
Concentration: Marketing, Entrepreneurship
Followers: 0
Kudos [?]: 70 [0], given: 8
Re: Is |x - y | > |x | - |y | ? (1) y < x (2) xy < 0 [#permalink] 06 Mar 2013, 21:36
1) Taking statement (1), if y < x, then there can be two cases -
a) y is negative, it can lead to two sub cases --
(i) x is negative ==> as y < x so |y| > |x| ==> |x| - |y| will be < 0, and |x - y| > 0 ==> |x - y| > |x| - |y|
(ii) x is positive ==> |x - y| would be sum of absolute value of x and y, essentially |x| + |y| ==> |x - y| > |x| - |y|
Problem statement is true.
b) y is positive ==> x can only be positive ==> |x - y| = |x| - |y|
Problem statement is false.
Since we do not know, whether y is positive or negative we can not conclude from statement 1.
2) Taking statement (2), if xy < 0 ==> two sub cases
a) x < 0 and y > 0 ==> |x - y| = |x| + |y| which is greater than |x| - |y|
b) x > 0 and y < 0 ==> |x - y| = |x| + |y|, which is again greater than |x| - |y|
Statement (2) is sufficient enough to conclude the problem statement.
Director
Joined: 29 Nov 2012
Posts: 926
Followers: 12
Kudos [?]: 575 [0], given: 543
Re: Is |x-y|>|x|-|y| [#permalink] 08 Mar 2013, 04:56
So basically the explanation is that find the signs test in the following cases
++
- -
+ -
- +
since statement doesn't 1 doesn't help in any way its insufficient and statement 2 either both positive or both negative when we plug examples its never true so its sufficient?
_________________
Click +1 Kudos if my post helped...
Amazing Free video explanation for all Quant questions from OG 13 and much more http://www.gmatquantum.com/og13th/
GMAT Prep software What if scenarios gmat-prep-software-analysis-and-what-if-scenarios-146146.html
Intern
Joined: 29 Oct 2012
Posts: 7
Location: United States
Concentration: Finance, Technology
GMAT Date: 06-03-2013
GPA: 3.83
WE: Web Development (Computer Software)
Followers: 0
Kudos [?]: 6 [0], given: 0
Re: Is |x-y|>|x|-|y| [#permalink] 25 Mar 2013, 02:08
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E.
can anyone help me about the scenario whether we consider fractions or not in this case?
Scenario:
x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6
_________________
kancharana
Math Expert
Joined: 02 Sep 2009
Posts: 29776
Followers: 4896
Kudos [?]: 53434 [0], given: 8160
Re: Is |x-y|>|x|-|y| [#permalink] 25 Mar 2013, 03:56
Expert's post
kancharana wrote:
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E.
can anyone help me about the scenario whether we consider fractions or not in this case?
Scenario:
x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6
The point is that x = 1/2 and y = 1/3 do not satisfy xy < 0 (the second statement).
_________________
Intern
Joined: 29 Oct 2012
Posts: 7
Location: United States
Concentration: Finance, Technology
GMAT Date: 06-03-2013
GPA: 3.83
WE: Web Development (Computer Software)
Followers: 0
Kudos [?]: 6 [0], given: 0
Re: Is |x-y|>|x|-|y| [#permalink] 25 Mar 2013, 04:37
Bunuel wrote:
kancharana wrote:
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E.
can anyone help me about the scenario whether we consider fractions or not in this case?
Scenario:
x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6
The point is that x = 1/2 and y = 1/3 do not satisfy xy < 0 (the second statement).
Thank you... understood that.
_________________
kancharana
Intern
Joined: 01 Feb 2013
Posts: 10
Location: United States
Concentration: Finance, Technology
GPA: 3
WE: Analyst (Computer Software)
Followers: 0
Kudos [?]: 10 [0], given: 9
Re: Is |x-y|>|x|-|y| [#permalink] 25 Mar 2013, 10:19
kancharana wrote:
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E.
can anyone help me about the scenario whether we consider fractions or not in this case?
Scenario:
x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6
It's implied that it is integers on the GMAT? Is this perception by me correct or completely out of the blue?
_________________
Goal: 25 KUDOZ and higher scores for everyone!
Verbal Forum Moderator
Joined: 10 Oct 2012
Posts: 629
Followers: 65
Kudos [?]: 798 [0], given: 135
Re: Is |x-y|>|x|-|y| [#permalink] 25 Mar 2013, 11:39
Expert's post
kancharana wrote:
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E.
can anyone help me about the scenario whether we consider fractions or not in this case?
Scenario:
x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6
From F.S 1, we have that x>y. Thus |x-y| = x-y. Thus, we have to answer whether x-y>|x|-|y|.
or x-|x|>y-|y|. Now for x>0, and y>0, we have is 0>0 and hence a NO. Again, for x>0 and y<0, we have a YES. Insufficient.
For F.S 2, we know that x and y are of opposite signs. Thus, x and y being on the opposite sides of the number line w.r.t the origin, the term |x-y| will always be more than the difference of the absolute distance of x and y from origin.Sufficient.
B.
_________________
Last edited by mau5 on 05 Apr 2013, 04:02, edited 1 time in total.
Math Expert
Joined: 02 Sep 2009
Posts: 29776
Followers: 4896
Kudos [?]: 53434 [0], given: 8160
Re: Is |x-y|>|x|-|y| [#permalink] 26 Mar 2013, 01:06
Expert's post
tulsa wrote:
kancharana wrote:
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E.
can anyone help me about the scenario whether we consider fractions or not in this case?
Scenario:
x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6
It's implied that it is integers on the GMAT? Is this perception by me correct or completely out of the blue?
No, that's completely wrong, we cannot assume that x and y are integers, if this is not explicitly stated.
Generally, GMAT deals with only Real Numbers: Integers, Fractions and Irrational Numbers. So, if no limitations, then all we can say about a variable in a question that it's a real number.
For more check here: math-number-theory-88376.html
Hope it helps.
_________________
Intern
Joined: 05 Mar 2013
Posts: 15
GMAT Date: 04-20-2013
Followers: 0
Kudos [?]: 3 [0], given: 9
Re: Is |x-y|>|x|-|y| [#permalink] 04 Apr 2013, 11:18
Bunuel wrote:
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
Is |x-y|>|x|-|y|?
Probably the best way to solve this problem is plug-in method. Though there are two properties worth to remember:
1. Always true: $$|x+y|\leq{|x|+|y|}$$, note that "=" sign holds for $$xy\geq{0}$$ (or simply when $$x$$ and $$y$$ have the same sign);
2. Always true: $$|x-y|\geq{|x|-|y|}$$, note that "=" sign holds for $$xy>{0}$$ (so when $$x$$ and $$y$$ have the same sign) and $$|x|>|y|$$ (simultaneously). (Our case)
So, the question basically asks whether we can exclude "=" scenario from the second property.
(1) y < x --> we can not determine the signs of $$x$$ and $$y$$. Not sufficient.
(2) xy < 0 --> "=" scenario is excluded from the second property, thus $$|x-y|>|x|-|y|$$. Sufficient.
(1) x>y
x=-2,y=-4 then 2>-2 --> yes
x=4,y=-2 then 6>2 --> yes
can't get a no, so sufficient
(2) xy<0
x=4,y=-2 then 6>2 --> yes
x=-2,y=4 then 6>-2 --> yes
can't get a no, so sufficient
ans: D
why is the answer B? is the question mis-written and the inequality sign should have >= or <=?
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 5964
Location: Pune, India
Followers: 1527
Kudos [?]: 8414 [0], given: 193
Re: Is |x-y|>|x|-|y| [#permalink] 05 Apr 2013, 01:55
Expert's post
margaretgmat wrote:
Bunuel wrote:
mmphf wrote:
Is |x-y|>|x|-|y| ?
(1) y < x
(2) xy < 0
Is |x-y|>|x|-|y|?
Probably the best way to solve this problem is plug-in method. Though there are two properties worth to remember:
1. Always true: $$|x+y|\leq{|x|+|y|}$$, note that "=" sign holds for $$xy\geq{0}$$ (or simply when $$x$$ and $$y$$ have the same sign);
2. Always true: $$|x-y|\geq{|x|-|y|}$$, note that "=" sign holds for $$xy>{0}$$ (so when $$x$$ and $$y$$ have the same sign) and $$|x|>|y|$$ (simultaneously). (Our case)
So, the question basically asks whether we can exclude "=" scenario from the second property.
(1) y < x --> we can not determine the signs of $$x$$ and $$y$$. Not sufficient.
(2) xy < 0 --> "=" scenario is excluded from the second property, thus $$|x-y|>|x|-|y|$$. Sufficient.
(1) x>y
x=-2,y=-4 then 2>-2 --> yes
x=4,y=-2 then 6>2 --> yes
can't get a no, so sufficient
(2) xy<0
x=4,y=-2 then 6>2 --> yes
x=-2,y=4 then 6>-2 --> yes
can't get a no, so sufficient
ans: D
why is the answer B? is the question mis-written and the inequality sign should have >= or <=?
What about the case x = 4, y = 2 in statement 1?
then we get 2 > 2 --> No
Hence statement 2 is not sufficient.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199 Veritas Prep Reviews Senior Manager Joined: 13 May 2013 Posts: 475 Followers: 1 Kudos [?]: 98 [0], given: 134 Re: Is |x-y|>|x|-|y| [#permalink] 26 Jun 2013, 07:17 Is |x-y|>|x|-|y| ? (1) y < x If y is less than x then (x-y) is going to be positive, however, we don't know if x and y are positive or negative: I. (x-y) > x -y ===> 0 > 0 II. (x-y) > -x -y ===> 2x > 0 III. (x-y) > -x +y ===> 2x > 2y IV. (x-y) > x +y ===> 0 > 2y The way I see it, is with case I.) 0>0 isn't true, II.) x must be some non-negative # that isn't zero, III.) x > y but we already know that, IV.) y must be some non-negative # that isn't zero. So we know that x is positive, y is negative and that x > y but we still can't get a single answer. All we know for sure is that y < x (x=-2, y=-4) |x-y|>|x|-|y| (x-y)>(-x)-(-y) x-y>-x+y 2x>2y x>y |-2-(-4)|>|-2|-|-4| |2|>|2|-|4| 2>-2 TRUE (x=2, y=-4) |x-y|>|x|-|y| (x-y)>x-(-y) x-y>x+y 0>2y |x-y|>|x|-|y| |2-(-4)|>|2|-|-4| 6>-2 TRUE (x=4, y=2) |x-y|>|x|-|y| (x-y)>(x)-(y) x-y>x-y 0>0 |x-y|>|x|-|y| |4-2|>|4|-|2| 2>2 FALSE (0>0 isn't possible, nor does it confirm y or x) NOT SUFFICIENT (2) xy < 0 So either x is less than zero or y is less than zero. x & y ≠ 0. There are two possible cases: (x is positive and y is negative) or (x is negative and y is positive) I. (x is positive and y is negative) |x-y|>|x|-|y| (x-y)>(x)-(-y) x-y>x+y 0>2y (which holds with the premise in the first case that y is negative) II. (x is negative and y is positive) |x-y|>|x|-|y| -(x-y)>(-x)-(y) -x+y>-x-y 2y>0 (which holds with the premise in the second case that y is positive) SUFFICIENT (B) (does that make sense?) Senior Manager Joined: 23 Jan 2013 Posts: 423 Schools: Cambridge'16 Followers: 2 Kudos [?]: 33 [0], given: 34 Re: Is |x-y|>|x|-|y| [#permalink] 06 Aug 2013, 08:28 Is |x-y|>|x|-|y| ? (1) y < x (2) xy < 0 what if like that (x-y)^2>x^2-y^2 so x^2-2xy+y^2>x^2-y^2 and x^2-2xy+y^2-x^2+y^2>0, and 2y^2-2xy>0 and 2y(y-x)>0 finally, y>0 and y-x>0 (y>x) Then, 1) y < x, not sufficient, because it negates only one final condition and y may be both positive and negative 2) xy < 0, sufficient, because confirms that when y>0, y>x when x is negative B write, if it is OK Senior Manager Joined: 10 Jul 2013 Posts: 343 Followers: 3 Kudos [?]: 209 [0], given: 102 Re: Is |x-y|>|x|-|y| [#permalink] 06 Aug 2013, 13:29 kancharana wrote: mmphf wrote: Is |x-y|>|x|-|y| ? (1) y < x (2) xy < 0 How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E. can anyone help me about the scenario whether we consider fractions or not in this case? Scenario: x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6 ....... st(1), use x=3 , y = 2 and then x=1 , y = -1 , we will have a double case. ----insufficient st(2), use x= -5 , y = 10 and then x=10 , y = -5, we will have a single solution and its yes |x-y|>|x|-|y| .so its sufficient. you can use fractions in st(2) maintaining one positive and the other negative. st(2) will provide the same. so Answer is (B) _________________ Asif vai..... Manager Status: Persevering Joined: 15 May 2013 Posts: 225 Location: India Concentration: Technology, Leadership GMAT Date: 08-02-2013 GPA: 3.7 WE: Consulting (Consulting) Followers: 1 Kudos [?]: 67 [0], given: 34 Re: Is |x-y|>|x|-|y| [#permalink] 24 Aug 2013, 05:11 kancharana wrote: mmphf wrote: Is |x-y|>|x|-|y| ? (1) y < x (2) xy < 0 How it is B? Did they mention that X and Y are integers? No right, the answer should be E. If they provide details about X and Y as integers then it will be B otherwise it will be E. can anyone help me about the scenario whether we consider fractions or not in this case? Scenario: x=1/2, y=1/3 ==> |1/2-1/3|=1/6 and |1/2|-|1/3|=1/6 It really does not matter; no one is saying that they are integers. The problem with your approach is that you considered invalid values for the fractions. According to b xy<0; so either x or y must be -ve. Now, lets put the valid values as x=1/2 and y=-1/3; In LHS we get |1/2+1/3|=5/6 and in RHS we get 1/6; therefore the inequality holds, hence statement b is sufficient. _________________ --It's one thing to get defeated, but another to accept it. Intern Joined: 06 Apr 2012 Posts: 10 Followers: 0 Kudos [?]: 1 [0], given: 34 Re: Inequlities [#permalink] 28 Jun 2014, 19:32 First of all we need to consider different cases to solve this problem. take option 1) y<x this option can be subdivided into two blocks...when both are x,y>0 and x>y. lets take x=2, y=1 lx-yl = l2 -1l = 1 Right hand side of the equation = lxl - lyl = l2l - l1l = 1....so equation is invalid. lets take another example when x= 1 and y = -1... lx-yl = l1 - (-1)l = 2 and Rgiht hand side = 0 which make our equation valid....hence we cannot conclude anything from this option. take option 2) xy<0 under this option there can be two cases....a) x>0 and y<0 (b) x<0 and y>0 lets take a) and use some values.... x=2 and y = -1... simplifing the equation we get...lx-yl = 3 where lxl - lyl = 1 it makes equation valid. now take b) x= -2 and y = 1...we get lx-yl = 3 and lxl - lyl = 1 its also satisfy our given equation. so this option is sufficent to answer the given question. OA (B) Intern Joined: 23 Sep 2014 Posts: 15 Followers: 0 Kudos [?]: 0 [0], given: 6 Re: Is |x-y|>|x|-|y| [#permalink] 17 Dec 2014, 04:45 I really never understand these questions in general about absolute values. What exactly is the difference between |x-y| and |x| - |y| ? Math Expert Joined: 02 Sep 2009 Posts: 29776 Followers: 4896 Kudos [?]: 53434 [0], given: 8160 Re: Is |x-y|>|x|-|y| [#permalink] 17 Dec 2014, 04:50 Expert's post JoostGrijsen wrote: I really never understand these questions in general about absolute values. What exactly is the difference between |x-y| and |x| - |y| ? One should go through basics and brush fundamentals first and only after that practice questions, especially hard ones. Theory on Abolute Values: math-absolute-value-modulus-86462.html Absolute value tips: absolute-value-tips-and-hints-175002.html DS Abolute Values Questions to practice: search.php?search_id=tag&tag_id=37 PS Abolute Values Questions to practice: search.php?search_id=tag&tag_id=58 Hard set on Abolute Values: inequality-and-absolute-value-questions-from-my-collection-86939.html _________________ Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 5964 Location: Pune, India Followers: 1527 Kudos [?]: 8414 [0], given: 193 Re: Is |x-y|>|x|-|y| [#permalink] 17 Dec 2014, 07:08 Expert's post JoostGrijsen wrote: I really never understand these questions in general about absolute values. What exactly is the difference between |x-y| and |x| - |y| ? Try to put in values for x and y (positive as well as negative) and that will help you see the difference between these expressions. _________________ Karishma Veritas Prep | GMAT Instructor My Blog Get started with Veritas Prep GMAT On Demand for$199
Veritas Prep Reviews
Intern
Joined: 01 Nov 2012
Posts: 2
Followers: 0
Kudos [?]: 0 [0], given: 12
Re: Is |x-y|>|x|-|y| [#permalink] 11 Jan 2015, 05:48
I have recently started with the prep thus looking for patterns to solve the Absolute Value questions.
Wanted to check if the below solution stands valid.
Square both the sides of the question stem
|x-y|^2>(|x|-|y|)^2 --- |x-y|^2 = (x-y)^2
x^2+y^2-2xy>x^2+y^2-2|x||y|
xy<|x||y|
From 1
y<x - In Sufficient
From 2
xy<0 (Either of them is negative i.e. x +ve y -ve or y +ve or x -ve)
Sufficient
Re: Is |x-y|>|x|-|y| [#permalink] 11 Jan 2015, 05:48
Go to page Previous 1 2 3 Next [ 43 posts ]
Similar topics Replies Last post
Similar
Topics:
2 Is xy + xy < xy ? 1 17 Jun 2014, 18:45
8 Is xy + xy < xy ? 7 02 Mar 2013, 12:03
3 Is xy > x/y? 9 02 Dec 2011, 03:36
4 Is xy > x/y? 6 25 Feb 2011, 07:15
6 Is |x-y| = ||x|-|y|| 19 05 Oct 2009, 08:52
Display posts from previous: Sort by
|
2015-10-07 16:32:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5897244215011597, "perplexity": 4645.503736494094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737867468.81/warc/CC-MAIN-20151001221747-00051-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://www.researching.cn/articles/OJ74bc161dd57d8125/html
|
Search by keywords or author
• Photonics Research
• Vol. 10, Issue 5, 1271 (2022)
Zhuohui Yang1, Zhengqing Ding1, Lin Liu1, Hancheng Zhong1, Sheng Cao1, Xinzhong Zhang1, Shizhe Lin1, Xiaoying Huang1, Huadi Deng1, Ying Yu1、*, and Siyuan Yu1、2
Author Affiliations
• 1State Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China
• 2e-mail: yusy@mail.sysu.edu.cn
• show less
Zhuohui Yang, Zhengqing Ding, Lin Liu, Hancheng Zhong, Sheng Cao, Xinzhong Zhang, Shizhe Lin, Xiaoying Huang, Huadi Deng, Ying Yu, Siyuan Yu. High-performance distributed feedback quantum dot lasers with laterally coupled dielectric gratings[J]. Photonics Research, 2022, 10(5): 1271 Copy Citation Text show less
Abstract
The combination of grating-based frequency-selective optical feedback mechanisms, such as distributed feedback (DFB) or distributed Bragg reflector (DBR) structures, with quantum dot (QD) gain materials is a main approach towards ultrahigh-performance semiconductor lasers for many key novel applications, as either stand-alone sources or on-chip sources in photonic integrated circuits. However, the fabrication of conventional buried Bragg grating structures on GaAs, GaAs/Si, GaSb, and other material platforms has been met with major material regrowth difficulties. We report a novel and universal approach of introducing laterally coupled dielectric Bragg gratings to semiconductor lasers that allows highly controllable, reliable, and strong coupling between the grating and the optical mode. We implement such a grating structure in a low-loss amorphous silicon material alongside GaAs lasers with InAs/GaAs QD gain layers. The resulting DFB laser arrays emit at pre-designed 0.8 THz local area network wavelength division multiplexing frequency intervals in the 1300 nm band with record performance parameters, including sidemode suppression ratios as high as 52.7 dB, continuous-wave output power of 26.6 mW (room temperature) and 6 mW (at 55°C), and ultralow relative intensity noise (RIN) of $<-165 dB/Hz$ (2.5–20 GHz). The devices are also capable of isolator-free operating under very high external reflection levels of up to $-12.3 dB$ while maintaining high spectral purity and ultralow RIN qualities. These results validate the novel laterally coupled dielectric grating as a technologically superior and potentially cost-effective approach for fabricating DFB and DBR lasers free of their semiconductor material constraints, which are thus universally applicable across different material platforms and wavelength bands.
1. INTRODUCTION
Embedding semiconductor lasers with Bragg gratings as wavelength-selective feedback mechanisms is a well-established approach to achieving high-quality single-frequency lasing. In conjunction with the distinctive properties of various compound semiconductor gain materials, distributed feedback (DFB) and distributed Bragg reflector (DBR) lasers are finding a wide range of applications in both classical and quantum domains, such as with InGaAs emitting in the near infrared for optical communication [1], with GaAs or GaAsP in the red spectral range [2] for atomic clocks [3], atom interferometry [4], and efficient optical pumping [5], with GaSb or InAs/AlSb in the middle to far infrared [6] for trace-gas sensing [7], and with nitride semiconductors in the green to ultraviolet for absorption spectroscopy [8] and high-density data storage [9].
For optical data interconnect applications, the 1310 nm band is of particular interest for low-cost wavelength division multiplexing (WDM) systems [10], 5G and 6G optical networks [11], as well as for LiDAR [12,13] and sensing [14,15]. Compared with conventional InGaAs/InGaAlAs quantum well (QW) materials emitting in the same wavelength range, self-assembled InAs/GaAs quantum dots (QDs) have achieved superior performance, including lower threshold currents [16] and higher-temperature stability [17] due to the zero-dimensional (0D) carrier confinement in QDs. A highly desirable feature of QD lasers is their ultralow relative intensity noise (RIN) [1820] originating from their very low linewidth enhancement factor $α$, which affords great advantages in analog transmission [such as radio over fiber (RoF)] and sensing [21]. The low $α$ also underpins their tolerance to high levels of external optical feedback [2224], making QD lasers very promising light sources for reliable, scalable, and isolator-free photonic integrated circuits (PICs) [25]. Furthermore, InAs/GaAs QD materials have also been proved to be highly tolerant to epitaxial defects [2628], yielding high-performance Fabry–Perot (FP)-type laser diodes epi-grown on silicon substrate [2830]. Similar advantages have also been observed in other QD laser systems such as InAs/InP [31,32].
For DFB lasers, the buried Bragg gratings, as first realized in InP-based 1550 nm lasers, are conventionally placed on top of the active layer and fabricated through a regrowth process after grating definition and etching. The proximity to the waveguide allows the grating to intercept the optical field with a high coupling coefficient $κ$. This conventional approach has also been attempted in other materials and wavelengths such as GaAs-based QD lasers. In 2011, Tanaka et al. demonstrated a 1293 nm InAs/GaAs QD DFB laser with a GaAs grating buried in a metal organic vapor phase epitaxy (MOVPE) regrown InGaP cladding, achieving a $κ$ of $40 cm−1$ and sidemode suppression ratio (SMSR) of 45 dB [33]. In 2020, Wan et al. demonstrated a 1310 nm QD DFB laser epitaxially on silicon by molecular beam epitaxy (MBE) regrowth. Using a GaAs grating buried in an $Al0.4Ga0.6As$ upper cladding, a $κ$ of $45 cm−1$ and an SMSR of $>50 dB$ were achieved [34].
However, the regrowth process represents poor productivity for many material systems. For GaAs-based laser devices, an InGaP upper cladding layer requires wafer transfer from MBE to MOCVD systems, while an AlGaAs upper cladding layer requires rigorous pretreatment before regrowth via an ultrahigh-vacuum MBE chamber. Harder still, the regrowth process for GaN(Sb)-based lasers suffers from lack of a contamination-free AlGaN(Sb) regrowth process or other latticed-matched cladding materials with sufficient bandgap and refractive index contrasts.
An alternative, regrowth-free approach uses Bragg gratings etched alongside a ridge waveguide to form a laterally coupled DFB (LC-DFB) laser. For InP-based laser structures, the gratings can be fabricated simultaneously during the waveguide etching process. Using an aluminum-containing stop-etch layer and a chemically selective recipe [35,36], the grating penetration depth can be precisely delimited to just above the active layer to achieve a precise $κ$ value. However, translating this approach to GaAs- or GaSb-based DFB laser structures has proven to be challenging due to the lack of a suitable selective etch-stop layer. Previously demonstrated GaAs [37], GaSb [38], or GaN [39] ridge waveguides with LC gratings fabricated using such a one-step reactive ion etching (RIE) approach suffer from the fact that, due to local chemical transportation and reaction rate variations caused by the etched ridge and the very narrow grating gaps, it is very difficult to control the etch depth at the foot of the ridge waveguide where the grating intercepts the optical mode. An undesirable feature known as “footing,” which is a gradual increase in etch depth away from the foot of the etched waveguide, and another feature known as “RIE-lag,” which is a decreased etch depth in narrow gaps, result in significant uncertainties in the grating $κ$ value.
To circumvent this problem, in a previous work, the authors chose to etch the lateral grating deeply through the active region so that the grating etch depth no longer affects $κ$ [40]. However, a deep-etched active waveguide suffers from increased surface recombination and optical scattering loss, and a waveguide with practical widths can support more than one transverse mode, with the unwanted high-order modes as favored lasing modes due to their higher $κ$ values [40]. For GaSb lasers, metal gratings deposited after waveguide etching [41] were also used. While providing strong optical coupling, metal gratings can introduce significant additional absorption loss in the laser cavity.
In this paper, we demonstrate a novel dielectric grating structure placed alongside single transverse mode ridge waveguides that have a precisely controlled trapezoid cross-sectional profile etched to a depth just above the active layer. Fabricated in an amorphous silicon ($α$-Si) layer deposited after the formation of the ridge waveguide, the grating corrugations, plasma-etched into the $α$-Si, are precisely stopped at an underlying etch stop layer of $Al2O3$ deposited after the waveguide etching and before the $α$-Si layer. A high-contrast grating (with a refractive index difference of $Δn∼2$) is formed between the $α$-Si corrugations and a subsequently deposited silicon dioxide ($SiO2$) cladding layer, producing an LC grating with significantly enhanced and precisely controllable coupling coefficient $κ$.
We implemented the novel structure on an InAs/GaAs QD gain material, producing LC-DFB laser arrays emitting across the 1300 nm band on a 0.8 THz local area network wavelength division multiplexing (LWDM) grid. The devices emit more than 26.6 mW of single-mode output power at room temperature and a typical SMSR greater than 52.7 dB. They also demonstrate ultralow RIN of $<−165 dB/Hz$ in the range of 2.5–20 GHz and isolator-free operation under external feedback levels of up to $−12.3 dB$ (5.9%). The output power, SMSR, and RIN values are the best of the reported values of InAs/GaAs QD LC-DFB lasers, as far as we are aware of. These superior performances of the devices validate the novel LC grating as an effective regrowth-free approach to grating-based laser fabrication. In addition to the elimination of regrowth, the deployment of the novel LC grating structure decouples its fabrication from specific laser materials, and therefore the scheme could serve as a universal alternative approach for high-performance semiconductor laser devices employing grating structures as optical feedback mechanisms.
2. DESIGN AND FABRICATION OF THE LC-DFB QD LASER
The InAs/GaAs QD laser structure, in a typical p-i-n configuration [Fig. 1(a)], was grown on 3-inch (1 inch = 2.54 cm) semi-insulating GaAs (001) substrates in a solid-source MBE chamber. First, a 500 nm Si-doped GaAs buffer layer and a 1.8 μm $Al0.4Ga0.6As$ cladding layer were grown, followed by an active region that contains a five-layer InAs/GaAs QD separated by 35 nm GaAs barriers. Each QD layer comprises 2.4 ML InAs covered with a 3.5 nm $In0.15Ga0.85As$ strain-reducing layer. Modulation p-doping with Be was implemented in a 6 nm GaAs layer located 10 nm beneath each QD layer to obtain a concentration of 20 acceptors per dot. Afterwards, 1.8 μm Be-doped $Al0.4Ga0.6As$ and 100 nm GaAs layers were grown as p-cladding and p-contact layers, respectively. The InAs/GaAs QD gain materials with a density of $5.5×1010 cm−2$ were achieved, as indicated by the $1 μm×1 μm$ atomic force microscope (AFM) inset of Fig. 1(b) from an uncapped QD sample grown on a GaAs substrate under the same QD growth conditions. Room-temperature photoluminescence (PL) emission peaking at 1308 nm was observed with a narrow full-width at half-maximum (FWHM) of 30.9 meV. Additionally, large quantized-energy separation of 80 meV between the ground-state and the first exited state effectively suppressed carrier overflow at high temperatures. Traditional multi-mode ridge waveguide FP lasers (20 μm ridge width) with cleaved facets were fabricated to characterize the properties of the QD materials. Light–current–voltage ($L–I–V$) characteristics of the fabricated laser with a length of 2000 μm and its temperature dependence under a continuous-wave (CW) condition are shown in Fig. 1(c). The threshold currents are as low as 80 mA ($200 A/cm2$) at 25°C. Reduced temperature sensitivity is achieved with the characteristic temperature $T0$ as high as 98 K in the range of 25°C–55°C and 53 K in the range of 65°C–115°C. Since the measurements were carried out under CW mode, the extracted $T0$ should be an underestimate of the true value due to junction heating.
Figure 1.Material properties of InAs/GaAs QD lasers. (a) Cross-sectional scanning electron microscope (SEM) image of layer stack of the epi-wafer. The inset is the transmission electron microscope (TEM) image of the five QD layers. (b) Photoluminescence spectrum of the QD active layers on GaAs. The inset shows the atomic force microscope (AFM) image of an uncapped QD layer. (c) Light–current–voltage ($L–I–V$) characteristics of the fabricated laser with a length of 2000 μm and its temperature dependence under continuous-wave (CW) condition ranging from 25°C to 115°C. The inset shows the natural logarithm of threshold current versus stage temperature. The dashed line represents linear fitting to the experimental data.
The as-grown wafers were then processed into LC-DFB laser arrays. As shown in Fig. 2(a), a waveguide width of 2.1 μm and a depth of 1.7 μm are used to support only the lowest-order transverse mode (TE00). The ridge waveguides were patterned using electron beam lithography (EBL) and etched using an optimized chlorine-based GaAs inductively coupled plasma reactive-ion etching (ICP-RIE) process, with a trapezoid cross section, sidewall slope of $θ=76°$, and near-zero footing [Fig. 2(b)]. The near-ideal trapezoid waveguide profile is key to a deterministic grating coupling coefficient $κ$ and low scattering loss, both very important for improved DFB laser performance. In addition, a small footing gives rise to a larger $κ$, which results from the increased evanescent field into the grating region [42].
Figure 2.(a) Schematic of the DFB laser structure, including the near-zero “footing” trapezoid waveguide and the $α$-Si gratings (not to scale); (b) cross-sectional SEM image of the trapezoid waveguide with $θ=76°$, with the $α$-Si and the ARP6200 photoresist layers also present; (c) SEM images of the etched $α$-Si gratings with a $λ/4$ phase shift in the middle; (d) microscope image of laser array.
A 10 nm $Al2O3$ passivation layer and a 150 nm thick $α$-Si ($n∼3.495$) layer were deposited by atomic layer deposition (ALD) and ICP-CVD, respectively, with both layers covering the entire sample surface terrain, including the sloped sidewalls. First-order gratings with a period $Λ$ in a range of 194.5–199.7 nm, grating duty cycle of 1:1, and extrusion of 8 μm from the waveguide foot were designed and patterned by an EBL resist (ARP6200) alongside the ridge. The grating duty cycle is defined as the ratio of the grating width to the groove width. A $λ/4$ phase shift was placed in the middle of the gratings to force lasing in the defect mode. LWDM-compatible laser arrays were achieved by adjusting the grating period $Λ$ of adjacent lasers on the same bar, with a change of $ΔΛ=0.727 nm$ resulting in a wavelength increment of 0.8 THz. The grating corrugations were etched through the $α$-Si using a fluoride-based RIE process. The SEM image of Fig. 2(c) reveals a high-quality LC grating with very sharp and smooth sidewalls. With near-zero footing and a naturally chemically selective etch recipe, this dielectric grating structure is less sensitive to processing variations and, thus, more manufacturable than a buried heterostructure DFB structure. Grating coupling coefficient $κ$ is calculated from the lateral electric field distribution and effective index of the fundamental transverse electric mode via coupled-mode theory [4244], detailed in Appendix A. For the designed devices, a first-order grating with a duty cycle of 1:1 produces a calculated $κ$ of $3.24 mm−1$.
The corrugations were subsequently covered with a 200 nm layer of $SiO2$ ($n∼1.46$) to form a high-contrast grating prior to contact window opening and Ti/Pt/Au p-type contact deposition [Fig. 2(d)]. An AuGe/Ni/Au n-type contact was deposited in the back after thinning the GaAs substrate down to 200 μm. After being cleaved into bars, the two facets were covered with a six-layer $SiO2/TiO2$ high-reflection (HR, 96.8%) coating and a one-pair $SiO2/TiO2$ anti-reflection (AR, 1.7%) coating to suppress the facet backreflections and increase output power. Finally, the laser arrays were mounted epitaxy-side-up on gold-coated copper heat sinks.
3. LASING CHARACTERISTICS OF THE FABRICATED LC-DFB QD LASER
At room temperature (25°C), a typical $2.1 μm×1.5 mm$ DFB laser device [Fig. 3(a)] has a turn-on voltage of 0.93 V and a differential series resistance of $10 Ω$. The measured CW threshold current of 18 mA corresponds to a current density of $571 A cm−2$. Above the threshold, the output power follows a near-linear curve with a slope efficiency of $0.2 W A−1$. The AR facet output power of 26.6 mW was obtained at an injection current of 150 mA or $∼8.3×Ith$. Figure 3(b) shows CW lasing up to 55°C with output power of $>6 mW$, and it is believed that the operation temperature can be further increased by more effective junction heat dissipation, by either further thinning the substrate or flip-chip bonding.
Figure 3.(a) Typical $L–I–V$ characteristics of a DFB laser with a $2.1 μm×1500 μm$ cavity at room temperature; (b) temperature-dependent $L–I$ curves from the DFB laser, showing lasing up to 55°C under CW operation; (c), (d) optical spectra of the single DFB laser operating just below threshold (c) and at a drive current of 80 mA (d).
The experimental value of $κ$ is estimated from the photonic bandgap width $λs=0.436 nm$ observable from the below-threshold amplified spontaneous emission (ASE) spectrum of Fig. 3(c). The total coupling strength $κL$ for this 1.5 mm long device was estimated to be $∼3.0$ ($κ=2.0$), which is consistent with the theoretical value when taking into consideration the deviations of rating duty cycle (1:1.9 as shown in the inset of Fig. 2(c)]. When increasing the current to 80 mA ($4.4×Ith$), the central defect mode seen in the ASE spectrum rises to be the dominant longitudinal mode with an SMSR of 52.7 dB [Fig. 3(d)].
Stable single-mode operation was observed at CW currents up to 150 mA in the temperature range of 20°C–50°C, with a linear wavelength–temperature tuning rate of 0.12 nm/°C and a quadratic wavelength–current tuning curve [Figs. 4(a) and 4(b)]. It is noteworthy that the laser maintains single-mode operation across the entire current range. This high single-mode quality is credited to the novel $α$-Si grating along the trapezoid waveguide, which affords reliable deterministic optical coupling.
Figure 4.(a) Wavelength shift with injection currents; (b) wavelength shift with heat-sink temperature; (c), (d) optical spectra and lasing frequencies of an LWDM DFB laser array measured at 100 mA.
Across each eight-device bar, channel spacing of $0.80±0.10 THz$ was measured, producing wavelengths ranging from 1300.05 to 1332.41 nm [Figs. 4(c) and 4(d)], matching well with the standard LWDM grid. This wavelength range was limited by the range of grating periods we fabricated. The high-precision EBL process together with the broad gain bandwidths of QD materials affords very promising applications for both LWDM and coarse wavelength division multiplexing within the O-band.
4. RELATIVE INTENSITY NOISE AND EXTERNAL FEEDBACK SENSITIVITY
The RIN across the frequency range of 2.5–20 GHz is assessed using a commercial system (SYCATUS A0010A), measuring $<−155 dB/Hz$ at $4×Ith$ and reducing to a saturated minimum level of $∼−165 dB/Hz$ at $9×Ith$, as shown in Fig. 5. The relaxation oscillation (RO) of this device is suppressed due to the large damping factor of the QD as reported [45]. To the best of our knowledge, this ultralow RIN is the best-reported result among QD DFB lasers and is in good agreement with reported values in both GaAs [20] and silicon [19] based QD-FP lasers. We consider that the high output power decreases the proportion of spontaneous emission, while the suitable grating $κ$ value ensures single-mode operation without mode hopping or spatial hole burning at high injection currents. Both can effectively suppress the disturbance of photons and carriers and thus yield an ultralow RIN.
Figure 5.Measured RIN spectra at several bias currents at 25°C.
Finally, the performance of the QD LC-DFB lasers under coherent optical feedback was assessed using the optical measurement setups shown schematically in Fig. 6(a). The emission from the QD laser AR facet is coupled into the feedback test system by a lensed fiber with coupling efficiency of 20%–30%, and divided into feedback and detection paths by a 90/10 fiber coupler. On the feedback path (10% of the coupled power), a fibered optical circulator is used to feed the light back to the laser cavity. Since the external cavity resonance frequency (17.25 MHz in this stage) is much less than the laser RO frequency, the impact of the feedback phase is negligible. The feedback strength $rext$ is defined as the ratio of the returning power to the laser free-space output power, and precise returning power can be obtained by calculating the product of the power (detected by a power meter) and the lensed fiber coupling efficiency. The optical feedback intensity is controlled by changing the operating current of a polarization-maintaining boost optical amplifier (BOA, Thorlabs S9FC1132P). A filter with 0.8 nm bandwidth is employed to spectrally suppress the ASE from the BOA. A polarization controller is inserted in the external cavity to compensate for the polarization rotation in the fiber. The insertion losses produced in the lens fiber, BOA, beam splitter, and each connector are carefully calibrated. The remaining 90% of the coupled power is sent to a high-resolution (0.03 nm) optical spectrum analyzer (OSA, Anritsu MS9740A) or RIN measurement system (SYCATUS A0010A) to monitor the evolution of spectra and RIN as the feedback strength $rext$ varies. For the whole measurement, the DFB laser is mounted on a thermo-electric cooler (TEC) operated at 25°C.
Figure 6.(a) Experimental setup used for the long-delay feedback measurements. LF, lens fiber; PM, power meter; BOA, boost optical amplifier; OSA, optical spectrum analyzer; RIN, relative intensity noise; PC, polarization controller; ISO, optical isolator; BPF, bandpass filter. (b) Evolution of the SMSR with increasing feedback strength; the inset is the optical spectrum of the DFB laser as the feedback strength increases. (c) Change of RIN in the same DFB laser under $2.5×Ith$, $3×Ith$, and $4×Ith$ current injections. The inset is the frequency domain plot of RIN as a function of increasing feedback strength.
Figures 6(b) and 6(c) present the evolution of SMSR and RIN with the laser operating at $4×Ith$. This device shows RO peaks at a low injection current, which move from 2 GHz to about 5.5 GHz with the increasing bias current [inset of Fig. 6(c)]. Little sign of deterioration is observed until the optical feedback levels reach $rext=5.9%$ ($−12.3 dB$). In particular, the SMSR of the laser is found to be still above 50 dB under this feedback strength. RIN levels (at the frequency of 5 GHz) rise only slowly until $rext=5.6%$ ($−12.5 dB$), without any visible periodic or chaotic oscillations in the RIN spectra. A sharp increase in RIN indicates a transition to the coherence collapse regime beyond this critical level of optical feedback $fext,c$. For DFB lasers, $fext,c$ is strongly associated with the normalized coupling coefficient $κL$ ($L$ is the length of the laser cavity), where an increased $κL$ will lead to an increased $fext,c$ [46,47].
5. DISCUSSION
Given the rapid development of QD-DFB lasers at 1310 nm on both GaAs and silicon substrates, it is useful to compare the performance of our device with those reported in the literature. Table 1 lists the threshold current (current density), power, highest work temperature, SMSR, RIN, and optical feedback tolerance of reported DFB lasers together with our device. In general, compared with commercial QW lasers, QD lasers exhibit excellent performance in terms of high operating temperature, low RIN, and high optical feedback tolerance owing to their stronger carrier confinement, larger damping factor, and smaller $α$ factor. QD lasers with buried [24,52] gratings show good feedback tolerance, but the RIN needs to be further improved. Our device simultaneously achieves high output power (26.6 mW), ultralow RIN ($−165 dB/Hz$), and high tolerance to optical feedback ($−12.3 dB$).
Comparison of the Performance of Our Device with Reference QD DFB Laser at 1310 nm
Year Substrate Grating $κ (mm−1)$ Threshold Current (mA) Threshold Current Density ($A/cm2$) Power (mW) SMSR (dB) $T$(°C) RIN (dB/Hz) Anti-feedback (dB) Ref. 2003 GaAs GaAs sidewall – 3 – 9.3 50 – – –14 [48] 2005 GaAs Metal sidawall – 5 – 12 $>50$ – – – [49] 2011 GaAs InGa/GaAs buried 4 6.8 – 10 45 80 – – [33] 2011 GaAs Cr sidewall – 18 1500 $>10$ 53 85 – –12 [50] 2014 GaAs InGa/GaAs buried 2.5 43.8 1830 34 58 60 – – [51] 2018 GaAs GaAs sidewall – 30 1710 23 51 – – – [37] 2018 GaAs InGa/GaAs buried 4 6.2 1107 20 $>40$ 70 –150 –8 [52] 2018 Heterogenous integrated GaAs/Si Si 7.7 9.5 205 2.5 47 100 – – [53] 2018 Monolithic integrated GaAS/Si GaAs sidewall 4.2 12 550 1.5 50 – – – [40] 2020 Monolithic integrated GaAs/Si AIGaAs/GaAs buried 4.5 20 440 4.4 50 70 – – [34] 2021 Monolithic integrated GaAs/Si Si 5 4 250 2.8 60 75 – – [54] 2021 Heterogeneous GaAs/oxide/Si Si 15 6.7 134 7 61 70 –125 – [55] 2021 GaAs InGaP/GaAs buried 1.6 9.3 – 15 50 55 –150 –6 [24] 2021 GaAs Amorphous Si sidewall 2.0 18 571 26.6 52.7 55 –165 –12.3 This work
To conclude, by implementing a novel first-order $α$-Si Bragg grating LC to a near-ideal trapezoid GaAs ridge waveguide, high-performance 1300 nm InAs/GaAs QD DFB laser arrays have been realized with high power, ultralow RIN, high robustness against optical feedback, and accurate LWDM grid. These excellent features make the device a very attractive candidate for high-performance digital and analog WDM optical transmission systems as well as on-chip sources for PICs where optical isolators are not readily available. In the near future, decreasing uniformities in the size or energy level of QDs while keeping the QD density [17], is expected to further improve the material modal gain and therefore thermal stability.
The novel grating structure affords accurate and deterministic grating coupling coefficient $κ$, which can be engineered independent of the laser material epitaxy process. The scheme can therefore be readily implemented on other material systems such as InP-, GaAs/Si- and GaSb-based compound semiconductor lasers, and in other wavelength windows by scaling the size of the grating and using low absorption dielectric materials for those wavelengths. The simplicity and versatility of the regrowth-free scheme make it possible to establish a new paradigm of semiconductor laser manufacturing. In this paradigm, multiple types of grating-based semiconductor lasers (including DFB, DBR, and tunable lasers) can be jointly manufactured on the same fabrication platform, with different active materials and operating wavelengths, which would potentially reduce their manufacturing cost very significantly.
APPENDIX A: CALCULATED AND EXPERIMENTAL VALUE OF κ
According to coupled mode theory, coupling coefficient $κ$ can be calculated by [42] $κ=n22−n12λ0neff·sin(πmΛ)m·Γ,$where $n1$ and $n2$ are the refractive index of $SiO2$ and $α$-Si grating, respectively. $λ0$ is the Bragg wavelength, $m=1$ is the grating diffraction order, $Λ$ is the duty cycle, and $Γ$ is the electric field overlap factor in the grating region. $neff$ is the effective refractive index of the mode traveling in the ridge waveguide. The average refractive index of the $α–Si/SiO2$ grating can be obtained by $navg=Λn22+(1−Λ)n12.$
The $neff$ and $Γ$ of TE00 mode can be obtained by solving the transverse light field distribution after replacing the grating index by $navg$. Table 2 illustrates the refractive index used in the theoretical calculation for the $α$-Si first-order LC grating. Figure 7(a) illustrates the dependence of coupling coefficient $κ$ on the grating duty cycle at different ridge waveguide etch depths. It can be seen that $κ$ is asymmetric with respect to the duty cycle and has a peak value at the etch depth of 1.7 μm. Figures 7(b) and 7(c) show coupling coefficient $κ$ as a function of grating thickness ($d$) and grating extrusion length ($l$) from the waveguide foot. It shows that the value of $κ$ saturates as $d$ or $l$ increases, which is expected since the optical field in the grating decays rapidly away from the foot of the ridge. For our fabricated devices with $d=150 nm$ and $l=8 μm$, $κ$ is not sensitive to the variation in $d$ and $l$, which should be preferable from the perspective of $κ$ stability.
Material Refractive Index Used in the Simulation at Wavelength of 1310 nm
MaterialRefractive Index
GaAs3.41
$Al0.4GaAs$3.25
$Al2O3$1.75
Si3.49
$SiO2$1.45
BCBa1.56
The material B-staged bisbenzocyclobutene (BCB, type: CYCLOTENE 4022-35) is used to planarize the laser ridge waveguide. Therefore, in our simulation, the background refractive index is set as 1.56.
Figure 7.Coupling coefficient $κ$ as a function of (a) grating duty cycle, (b) grating thickness, and (c) grating length.
Experimental grating coupling coefficient $κ$ is estimated from the bandgap width $λs=0.436 nm$ [as shown in Fig. 3(c)] by using the relation [56]$κ=(πngλsλB2)2−(πLg)2,$where $κ$ is the grating coupling coefficient, $ng$ is the group index, $λB$ is the lasing mode, and $Lg$ is the grating length. The total coupling strength $κL$ for the 1.5 mm long device was estimated to be $∼3.0$ ($κ=2.0$), which is very consistent with our theoretical design if taking into consideration the deviations of grating duty cycle [1:1.9 as shown in the inset of Fig. 2(c)]. Further increase in $κ$ by a factor of $>2$ is therefore possible by increasing the duty cycle or grating thickness.
References
[1] D. Botez, G. J. Herskowitz. Components for optical communications systems: a review. Proc. IEEE, 68, 689-731(1980).
[2] O. Brox, F. Bugge, A. Mogilatenko, E. Luvsandamdin, A. Wicht, H. Wenzel, G. Erbert. Distributed feedback lasers in the 760 to 810 nm range and epitaxial grating design. Semicond. Sci. Technol., 29, 095018(2014).
[3] E. Di Gaetano, S. Watson, E. McBrearty, M. Sorel, D. J. Paul. Sub-megahertz linewidth 780.24 nm distributed feedback laser for 87Rb applications. Opt. Lett., 45, 3529-3532(2020).
[4] V. Schkolnik, O. Hellmig, A. Wenzlawski, J. Grosse, A. Kohfeldt, K. Döringshoff, A. Wicht, P. Windpassinger, K. Sengstock, C. Braxmaier, M. Krutzik, A. Peters. A compact and robust diode laser system for atom interferometry on a sounding rocket. Appl. Phys. B, 122, 217(2016).
[5] Y. He, H. An, J. Cai, C. Galstad, S. Macomber, M. Kanskar. 808 nm broad area DFB laser for solid-state laser pumping application. Electron. Lett., 45, 163-164(2009).
[6] S. Stephan, D. Frederic, A. Markus-Christian. Novel InP- and GaSb-based light sources for the near to far infrared. Semicond. Sci. Technol., 31, 113005(2016).
[7] M. Hoppe, C. Aßmann, S. Schmidtmann, T. Milde, M. Honsberg, T. Schanze, J. Sacher. GaSb-based digital distributed feedback filter laser diodes for gas sensing applications in the mid-infrared region. J. Opt. Soc. Am. B, 38, B1-B8(2021).
[8] S. Najda, P. Perlin, M. Leszczyński, T. Slight, W. Meredith, M. Schemmann, H. Moseley, J. Woods, R. Valentine, S. Kalra, P. Mossey, E. Theaker, M. Macluskey, G. Mimnagh, W. Mimnagh. A multi-wavelength (u.v. to visible) laser system for early detection of oral cancer. Proc. SPIE, 9328, 932809(2015).
[9] T. Miyajima, T. Tojyo, T. Asano, K. Yanashima, S. Kijima, T. Hino, M. Takeya, S. Uchida, S. Tomiya, K. Funato, T. Asatsuma, T. Kobayashi, M. Ikeda. GaN-based blue laser diodes. J. Phys.: Condens. Matter, 13, 7099(2001).
[10] J. C. Palais. Fiber Optic Communications(1988).
[11] T. Sudo, Y. Matsui, G. Carey, A. Verma, D. Wang, V. Lowalekar, M. Kwakernaak, F. Khan, N. Dalida, R. Patel, A. Nickel, B. Young, J. Zeng, Y. L. Ha, C. Roxlo. Challenges and opportunities of directly modulated lasers in future data center and 5G networks. Optical Fiber Communications Conference and Exhibition (OFC), 1-3(2021).
[12] C. P. Hsu, B. Li, B. Solano-Rivas, A. R. Gohil, P. H. Chan, A. D. Moore, V. Donzella. A review and perspective on optical phased array for automotive LiDAR. IEEE J. Sel. Top. Quantum Electron., 27, 8300416(2021).
[13] D. N. Hutchison, J. Sun, J. K. Doylend, R. Kumar, J. Heck, W. Kim, C. T. Phare, A. Feshali, H. Rong. High-resolution aliasing-free optical beam steering. Optica, 3, 887-890(2016).
[14] M.-C. Amann, M. Ortsiefer. Long-wavelength (λ1.3μm) InGaAlAs–InP vertical-cavity surface-emitting lasers for applications in optical communication and sensing. Phys. Status Solidi A, 203, 3538-3544(2006).
[15] A. Liu, P. Wolf, J. A. Lott, D. Bimberg. Vertical-cavity surface-emitting lasers for data communication and sensing. Photon. Res., 7, 121-136(2019).
[16] H. Y. Liu, S. L. Liew, T. Badcock, D. J. Mowbray, M. S. Skolnick, S. K. Ray, T. L. Choi, K. M. Groom, B. Stevens, F. Hasbullah, C. Y. Jin, M. Hopkinson, R. A. Hogg. p-doped 1.3 μm InAs/GaAs quantum-dot laser with a low threshold current density and high differential efficiency. Appl. Phys. Lett., 89, 073113(2006).
[17] T. Kageyama, K. Nishi, M. Yamaguchi, R. Mochida, Y. Maeda, K. Takemasa, Y. Tanaka, T. Yamamoto, M. Sugawara, Y. Arakawa. Extremely high temperature (220°C) continuous-wave operation of 1300-nm-range quantum-dot lasers. The European Conference on Lasers and Electro-Optics, PDA_1(2011).
[18] Y.-G. Zhou, C. Zhou, C.-F. Cao, J.-B. Du, Q. Gong, C. Wang. Relative intensity noise of InAs quantum dot lasers epitaxially grown on Ge. Opt. Express, 25, 28817-28824(2017).
[19] M. Liao, S. Chen, Z. Liu, Y. Wang, L. Ponnampalam, Z. Zhou, J. Wu, M. Tang, S. Shutts, Z. Liu, P. M. Smowton, S. Yu, A. Seeds, H. Liu. Low-noise 1.3 μm InAs/GaAs quantum dot laser monolithically grown on silicon. Photon. Res., 6, 1062-1066(2018).
[20] A. Capua, L. Rozenfeld, V. Mikhelashvili, G. Eisenstein, M. Kuntz, M. Laemmlin, D. Bimberg. Direct correlation between a highly damped modulation response and ultra low relative intensity noise in an InAs/GaAs quantum dot laser. Opt. Express, 15, 5388-5393(2007).
[21] D. A. I. Marpaung. High dynamic range analog photonic links(2009).
[22] B. Dong, J.-D. Chen, F.-Y. Lin, J. C. Norman, J. E. Bowers, F. Grillot. Dynamic and nonlinear properties of epitaxial quantum-dot lasers on silicon operating under long- and short-cavity feedback conditions for photonic integrated circuits. Phys. Rev. A, 103, 033509(2021).
[23] H. Huang, J. Duan, B. Dong, J. Norman, D. Jung, J. E. Bowers, F. Grillot. Epitaxial quantum dot lasers on silicon with high thermal stability and strong resistance to optical feedback. APL Photon., 5, 016103(2020).
[24] B. Dong, J. Duan, H. Huang, J. C. Norman, K. Nishi, K. Takemasa, M. Sugawara, J. E. Bowers, F. Grillot. Dynamic performance and reflection sensitivity of quantum dot distributed feedback lasers with large optical mismatch. Photon. Res., 9, 1550-1558(2021).
[25] J. C. Norman, D. Jung, Y. Wan, J. E. Bowers. Perspective: the future of quantum dot photonic integrated circuits. APL Photon., 3, 030901(2018).
[26] C. Hantschmann, Z. Liu, M. Tang, S. Chen, A. J. Seeds, H. Liu, I. H. White, R. V. Penty. Theoretical study on the effects of dislocations in monolithic III-V lasers on silicon. J. Lightwave Technol., 38, 4801-4807(2020).
[27] J. C. Norman, D. Jung, Z. Zhang, Y. Wan, S. Liu, C. Shang, R. W. Herrick, W. W. Chow, A. C. Gossard, J. E. Bowers. A review of high-performance quantum dot lasers on silicon. IEEE J. Quantum Electron., 55, 2000511(2019).
[28] S. Chen, W. Li, J. Wu, Q. Jiang, M. Tang, S. Shutts, S. N. Elliott, A. Sobiesierski, A. J. Seeds, I. Ross, P. M. Smowton, H. Liu. Electrically pumped continuous-wave III–V quantum dot lasers on silicon. Nat. Photonics, 10, 307-311(2016).
[29] J. C. Norman, R. P. Mirin, J. E. Bowers. Quantum dot lasers—history and future prospects. J. Vac. Sci. Technol. A, 39, 020802(2021).
[30] D. Jung, R. Herrick, J. Norman, K. Turnlund, C. Jan, K. Feng, A. C. Gossard, J. E. Bowers. Impact of threading dislocation density on the lifetime of InAs quantum dot lasers on Si. Appl. Phys. Lett., 112, 153507(2018).
[31] T. Septon, A. Becker, S. Gosh, G. Shtendel, V. Sichkovskyi, F. Schnabel, A. Sengül, M. Bjelica, B. Witzigmann, J. P. Reithmaier, G. Eisenstein. Large linewidth reduction in semiconductor lasers based on atom-like gain material. Optica, 6, 1071-1077(2019).
[32] Z. Lu, K. Zeb, J. Liu, E. Liu, L. Mao, P. Poole, M. Rahim, G. Pakulski, P. Barrios, W. Jiang, D. Poitras. Quantum dot semiconductor lasers for 5G and beyond wireless networks. Proc. SPIE, 11690, 116900N(2021).
[33] K. Takada, Y. Tanaka, T. Matsumoto, M. Ekawa, H. Z. Song, Y. Nakata, M. Yamaguchi, K. Nishi, T. Yamamoto, M. Sugawara, Y. Arakawa. Wide-temperature-range 10.3 Gbit/s operations of 1.3 μm high-density quantum-dot DFB lasers. Electron. Lett., 47, 206-208(2011).
[34] Y. Wan, J. C. Norman, Y. Tong, M. J. Kennedy, W. He, J. Selvidge, C. Shang, M. Dumont, A. Malik, H. K. Tsang, A. C. Gossard, J. E. Bowers. 1.3 μm quantum dot-distributed feedback lasers directly grown on (001) Si. Laser Photon. Rev., 14, 2000037(2020).
[35] C. B. Cooper, S. Salimian, H. F. Macmillan. Reactive ion etch characteristics of thin InGaAs and AlGaAs stop-etch layers. J. Electron. Mater., 18, 619-622(1989).
[36] G. C. Desalvo, W. F. Tseng, J. Comas. ChemInform abstract: etch rates and selectivities of citric acid/hydrogen peroxide on GaAs, Al0.3Ga0.7As, In0.2Ga0.8As, In0.53Ga0.47As, In0.52Al0.48As, and InP. ChemInform, 23, 309(1992).
[37] Q. Li, X. Wang, Z. Zhang, H. Chen, Y. Huang, C. Hou, J. Wang, R. Zhang, J. Ning, J. Min, C. Zheng. Development of modulation p-doped 1310 nm InAs/GaAs quantum dot laser materials and ultrashort cavity Fabry–Perot and distributed-feedback laser diodes. ACS Photon., 5, 1084-1093(2018).
[38] S. Forouhar, R. M. Briggs, C. Frez, K. J. Franz, A. Ksendzov. High-power laterally coupled distributed-feedback GaSb-based diode lasers at 2 μm wavelength. Appl. Phys. Lett., 100, 031107(2012).
[39] S. Masui, K. Tsukayama, T. Yanamoto, T. Kozaki, S.-I. Nagahama, T. Mukai. CW operation of the first-order AlInGaN 405 nm distributed feedback laser diodes. Jpn. J. Appl. Phys., 45, L1223-L1225(2006).
[40] Y. Wang, S. Chen, Y. Yu, L. Zhou, L. Liu, C. Yang, M. Liao, M. Tang, Z. Liu, J. Wu, W. Li, I. Ross, A. J. Seeds, H. Liu, S. Yu. Monolithic quantum-dot distributed feedback laser array on silicon. Optica, 5, 528-533(2018).
[41] C. A. Yang, S. W. Xie, Y. Zhang, J. M. Shang, S. S. Huang, Y. Yuan, F. H. Shao, Y. Zhang, Y. Q. Xu, Z. C. Niu. High-power, high-spectral-purity GaSb-based laterally coupled distributed feedback lasers with metal gratings emitting at 2 μm. Appl. Phys. Lett., 114, 021102(2019).
[42] A. Laakso, J. Karinen, M. Dumitrescu. Modeling and design particularities for distributed feedback lasers with laterally-coupled ridge-waveguide surface gratings. Proc. SPIE, 7933, 79332K(2011).
[43] W. Streifer, D. Scifres, R. Burnham. Coupling coefficients for distributed feedback single- and double-heterostructure diode lasers. IEEE J. Quantum Electron., 11, 867-873(1975).
[44] W.-Y. Choi, J. C. Chen, C. G. Fonstad. Evaluation of coupling coefficients for laterally-coupled distributed feedback lasers. Jpn. J. Appl. Phys., 35, 4654-4659(1996).
[45] J. Duan, H. Huang, B. Dong, J. C. Norman, Z. Zhang, J. E. Bowers, F. Grillot. Dynamic and nonlinear properties of epitaxial quantum dot lasers on silicon for isolator-free integration. Photon. Res., 7, 1222-1228(2019).
[46] F. Grillot, B. Thedrez, D. Guang-Hua. Feedback sensitivity and coherence collapse threshold of semiconductor DFB lasers with complex structures. IEEE J. Quantum Electron., 40, 231-240(2004).
[47] Q. Zou, K. Merghem, S. Azouigui, A. Martinez, A. Accard, N. Chimot, F. Lelarge, A. Ramdane. Feedback-resistant p-type doped InAs/InP quantum-dash distributed feedback lasers for isolator-free 10 Gb/s transmission at 1.55 μm. Appl. Phys. Lett., 97, 231115(2010).
[48] H. Su, L. Zhang, A. L. Gray, R. Wang, T. C. Newell, K. J. Malloy, L. F. Lester. High external feedback resistance of laterally loss-coupled distributed feedback quantum dot semiconductor lasers. IEEE Photon. Technol. Lett., 15, 1504-1506(2003).
[49] H. Su, L. F. Lester. Dynamic properties of quantum dot distributed feedback lasers: high speed, linewidth and chirp. J. Phys. D, 38, 2112-2118(2005).
[50] S. Azouigui, D.-Y. Cong, A. Martinez, K. Merghem, Q. Zou, J.-G. Provost, B. Dagens, M. Fischer, F. Gerschütz, J. Koeth, I. Krestnikov, A. Kovsh, A. Ramdane. Temperature dependence of dynamic properties and tolerance to optical feedback of high-speed 1.3 μm DFB quantum-dot lasers. IEEE Photon. Technol. Lett., 23, 582-584(2011).
[51] M. Stubenrauch, G. Stracke, D. Arsenijević, A. Strittmatter, D. Bimberg. 15 Gb/s index-coupled distributed-feedback lasers based on 1.3 μm InGaAs quantum dots. Appl. Phys. Lett., 105, 011103(2014).
[52] M. Matsuda, N. Yasuoka, K. Nishi, K. Takemasa, T. Yamamoto, M. Sugawara, Y. Arakawa. Low-noise characteristics on 1.3-μm-wavelength quantum-dot DFB lasers under external optical feedback. IEEE International Semiconductor Laser Conference (ISLC), 1-2(2018).
[53] S. Uvin, S. Kumari, A. De Groote, S. Verstuyft, G. Lepage, P. Verheyen, J. Van Campenhout, G. Morthier, D. Van Thourhout, G. Roelkens. 1.3 μm InAs/GaAs quantum dot DFB laser integrated on a Si waveguide circuit by means of adhesive die-to-wafer bonding. Opt. Express, 26, 18302-18309(2018).
[54] Y. Wan, C. Xiang, J. Guo, R. Koscica, M. J. Kennedy, J. Selvidge, Z. Zhang, L. Chang, W. Xie, D. Huang, A. C. Gossard, J. E. Bowers. High speed evanescent quantum-dot lasers on Si. Laser Photon. Rev., 15, 210057(2021).
[55] D. Liang, S. Srinivasan, A. Descos, C. Zhang, G. Kurczveil, Z. Huang, R. Beausoleil. High-performance quantum-dot distributed feedback laser on silicon for high-speed modulations. Optica, 8, 591-593(2021).
[56] G. Liu, G. Zhao, J. Sun, D. Gao, Q. Lu, W. Guo. Experimental demonstration of DFB lasers with active distributed reflector. Opt. Express, 26, 29784-29795(2018).
Zhuohui Yang, Zhengqing Ding, Lin Liu, Hancheng Zhong, Sheng Cao, Xinzhong Zhang, Shizhe Lin, Xiaoying Huang, Huadi Deng, Ying Yu, Siyuan Yu. High-performance distributed feedback quantum dot lasers with laterally coupled dielectric gratings[J]. Photonics Research, 2022, 10(5): 1271
|
2022-12-03 15:33:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 154, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4758792221546173, "perplexity": 10647.007691529658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00067.warc.gz"}
|
https://zbmath.org/?q=an%3A0853.32011
|
# zbMATH — the first resource for mathematics
A basic course on differential modules. (English) Zbl 0853.32011
Maisonobe, Philippe (ed.) et al., $$D$$-modules cohérents et holonomes. Cours d’été du CIMPA ’Éléments de la théorie des systèmes différentiels’, août et septembre 1990, Nice, France. Paris: Hermann. Trav. Cours. 45, 103-168 (1993).
This paper is a very good introduction to the theory of $$\mathcal D$$-modules: modules over the (sheaf of) ring(s) $$\mathcal D$$ of linear differential operators on a complex analytic variety. Chapter I is devoted to the definition and first properties of the ring $$\mathcal D$$, first of all on $$\mathbb{C}^n$$ and then on a complex analytic variety $$X$$. The filtration by the order is introduced and, by using Oka’s theorem ($${\mathcal O}_X$$ is coherent) the coherence of $${\mathcal D}_X$$ is proved. The chapter ends with some results on the Weyl algebra on a field of characteristic zero. Chapter II is centered on the study of filtrations and good filtrations on (left) $$\mathcal D$$-modules. (In proposition 12, part 1, the hypothesis $$\mathcal M$$ is $$\mathcal D$$-coherent is missing and necessary). Locally good filtrations are characterized in terms of the coherence of the associated graded module. Chapter III deals with the main geometric object associated to a coherent $$\mathcal D$$-module $$\mathcal M$$: its characteristic variety, $$\text{Char} ({\mathcal M})$$. Bernstein’s inequality is proved. A complete (microlocal) proof of the involutivity of the characteristic variety is given in the appendix. This proof is due to Malgrange and simplifies the proof he gave at the “Séminaire Bourbaki” [B. Malgrange, Lect. Notes Math. 710, 277-289 (1979; Zbl 0423.46033)]. In chapter IV the general theory of holonomic $$\mathcal D$$-modules is developed. Coherent $$\mathcal D$$-modules with characteristic variety equal to $$T^*_X X$$ (i.e. the zero section of $$T^*X$$ the cotangent bundle to $$X$$) are characterized as locally free (of finite rank) $${\mathcal O}$$-modules endowed with a holomorphic integrable connection. As an application of this result it is proved that if $$\mathcal M$$ is $$\mathcal D$$-coherent and $$\text{Char}({\mathcal M}) = T^*_Y X$$ (i.e. the conormal bundle to a smooth hypersurface $$Y \subset X$$) then $$\mathcal M$$ is locally isomorphic to a direct sum of a finite number of copies of $${\mathcal O}[*Y]/{\mathcal O}$$ (here $${\mathcal O}[*Y]$$ denotes the sheaf of meromorphic functions with poles along $$Y$$). The chapter ends with a proof of the existence of a minimal polynomial for an endomorphism of a holonomic $$\mathcal D$$-module. Dimension and multiplicity (at a point of $$T^*X)$$ of a coherent $$\mathcal D$$-module, are studied in chapter V. In particular their behavior in exact sequences and, as an application, the proof that, for a holonomic $$\mathcal D$$-module $$\mathcal M$$, the $${\mathcal D}_x$$-module $${\mathcal M}_x$$ is of finite length, for all $$x \in X$$. The chapter ends with some results on good graded resolutions for coherent $$\mathcal D$$-modules, the homological dimension of the fibers of $$\mathcal D$$ (which is equal to $$\dim X$$) and the existence of a canonical filtration for each coherent $$\mathcal D$$-module. Chapter VI is an Epilogue. The authors give a proof (inspired by M. Kashiwara [ Invent. Math. 38, 33-53 (1976; Zbl 0354.35082)]) of the existence of the Bernstein-Sato polynomial associated to a germ of an analytic function $$f$$ on $$X$$. As an application they give a proof of the coherence and the holonomicity of $${\mathcal O}[1/f]$$ (the sheaf of meromorphic functions with poles along $$f = 0$$).
For the entire collection see [Zbl 0824.00033].
##### MSC:
32C38 Sheaves of differential operators and their modules, $$D$$-modules
|
2021-04-17 02:21:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8931054472923279, "perplexity": 293.97032039390524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038098638.52/warc/CC-MAIN-20210417011815-20210417041815-00061.warc.gz"}
|
https://physics.stackexchange.com/questions/476497/quantum-mechanics-angular-momentum-spherical-tensor-components
|
# Quantum mechanics angular momentum spherical tensor components
In Sakurai Quantum Mechanics, problem 3.25b we imagine $$J_z^2$$ as the component of a tensor with components $$T_{ij} = J_iJ_j$$.
$$J_z^2 = \frac{1}{3}\pmb{J}^2 + (J_z^2 - \frac{1}{3}\pmb{J}^2)$$
The first term on the right side corresponds to the scalar part of the tensor, the second term corresponds to the symmetric part of tensor. In the solutions, they immedietly invoke that the second term transforms as a second rank tensor with magnetic quantum number $$0$$. (Notation as defined in Sakurai and Wikipedia)
$$J_z^2 = \pmb{T}^{(0)}_0 + \pmb{T}^{(2)}_0$$
I understand why this is rank two (because the symmetric part of the tensor is a rank two tensor), but why is the magnetic quantum number assumed to be $$0$$? Secondly, in general, how do I represent $$T_{ij}$$ in terms of the five spherical tensors, $${T}^{(0)}_0$$, $${T}^{(1)}_{\pm1,0}$$, and $${T}^{(2)}_{\pm2,\pm1,0}$$?
• You mean nine spherical tensors: 3 x 3 = 1 + 3 + 5. – JEB Apr 28 at 3:41
There are several approaches you can take for converting between cartesian and spherical tensors. You can start with the definition of spherical vectors:
$$\hat e^{\pm} = \frac 1 2 \mp[\hat x \pm i\hat y]$$ $$\hat e^0 = \hat z$$
and then make a correspondence with vector spins, $$|j,m\rangle$$:
$$\hat e^{\pm} \rightarrow |1,\pm 1\rangle$$ $$\hat e^0 \rightarrow |1, 0\rangle$$
and use Clebsch-Gordan coefficients to compute the addition of two vector spins, and then map those to spherical tensors:
$$|J,M\rangle \rightarrow T_M^J$$
For example:
$$|2, 2\rangle = |1,1\rangle|1,1\rangle$$
which means we can express a unit $$T_2^2$$ as a dyad:
$$\hat T_2^2 = e^+e^+=\frac 1 2 (-\hat x+i\hat y)(-\hat x+i\hat y) = \frac 1 2[\hat x\hat x - \hat y\hat y - i(\hat x\hat y+\hat y\hat x)]$$
So you can do that for every combination and invert the equations.
Or: you can look at Spherical harmonics in cartesian coordinates and then convert that to dyad (but only for natural form tensors):
$$T_1^2 \rightarrow Y_2^1 \propto -(x-iy)z \rightarrow -\hat xz + i\hat y z$$
and go from there, but normalizations and signs might no be as clear.
The result is that cartesian to spherical is as follows: The isotropic part is the trace of the cartesian tensor:
$$T_0^0 = \frac 1 3 T_{ii}$$
The J=1 are all from the antisymmetric part:
$$T_0^1 = \sqrt{\frac 1 2}[T_{xy}-T_{yx}]$$
$$T_{\pm 1}^1 = \mp \sqrt{\frac 1 2}[(T_{yz}-T_{zy})\pm i(T_{zx}-T_{xz}) ]$$
For the pure rank-2, aka natural form, you start with the symmetric trace free part:
$$S_{ij} = \frac 1 2 [T_{ij}-T_{ji}] - \frac 1 3 T_{ii}\delta_{ij}$$
as follows:
$$T_{\pm 2}^2 = \frac 1 2 [S_{xx}-S_{yy}\pm 2iS_{xy}]$$ $$T_{\pm 1}^2 = \frac 1 2 [S_{xz}+i S_{yz}]$$ $$T_0^2 = \sqrt{\frac 3 2}S_{zz}$$
The inversion of the pure rank 2 should be:
$${\bf S} = \left [ \begin{array}{ccc}T_2^2+T_{-2}^2-\sqrt{\frac 2 3}T_0^2 & -i(T_2^2-T_{-2}^2) & -T_1^2+T_{-1}^2 \\ -i(T_2^2-T_{-2}^2) & -T_2^2-T_{-2}^2-\sqrt{\frac 2 3}T_0^2 & i(T_1^2+T_{-1}^2)\\ -T_1^2+T_{-1}^2 & i(T_1^2+T_{-1}^2) & \sqrt{\frac 8 3}T_0^2 \end{array} \right ]$$
At higher rank, $$N$$, it gets difficult: you have to use standard young tableaux with $$N$$ boxes to get the irreducible representations of the permutation group on $$N$$ letters. Each of those, when applied to the indices, corresponds to a unique $$T_M^J$$ subspace (Schur-Weyl Duality).
At rank 3, you find:
$$\bf 3 \otimes 3 \otimes 3 = 7_S \oplus 3_S \oplus 5_M \oplus 3_M \oplus 5_M \oplus 3_M \oplus 1_A$$
where the subscripts refer to symmetric, mixed, and antisymmetric in indices, and the numbers are the dimension of the spherical tensor: $$2J+1$$.
So, for instance, $$\bf 1_A$$, is isotropic an proportional to $$\epsilon_{ijk}$$.
Meanwhile, the total symmetric part has 10 dimensions, 7 of which are pure rank, and 3 of which transform like a vector. To work that out, you start with the totally symmetric combinations of indices:
$$S_{ijk} = \frac 1 6 [T_{ijk} +T_{ski} +T_{kij} +T_{kji} +T_{ikj} +T_{jik}]$$
and subtract the trace:
$$N_{ijk} = S_{ijk} - \frac 3 5 \delta_{ij}V_k$$
where:
$$V_k = S_{iij} = S_{iji} = S_{jii}$$
becomes one of the $$T_M^1$$ components.
The pure rank 3 solution is:
$$T_{\pm 3}^3 = \frac 1 {\sqrt 8}[(-N_{xxx}+3N_{xyy})\mp iN_{yyy}]$$ $$T_{\pm 2}^3 = \frac 1 2[N_{xxz}-N_{yyz})\mp 2iN_{xyz}]$$ $$T_{\pm 1}^3 = \frac {\sqrt 15}3\big [ \frac 1 {\sqrt 2}[\mp N_{xzz}-iN_{yzz}]+ \frac 1 {\sqrt 8} [\mp(N_{xxx}-N_{xxy})+ i(N_{yyy}\pm N_{xxy})]\big ]$$ $$T_0^3 = \frac{\sqrt{10}} 3[\frac 1{\sqrt 2}(N_{xzz}-iN_{yzz})+ N_{zzz}]$$
The (anti-)symmetric tensor correspond to the index permutation derived from the standard Young Tableaux from the integer partitions of $$(1+1+1=3)$$ and $$3=3$$, which is similar to the rank two case.
The mixed symmetry rotationally invariant subspaces correspond to $$3=2+1$$, for which there are two standard Tableaux, leading to:
$$T^{(0,1;2)}=\frac 1 3 [T_{ijk}+T_{jik}-T_{kji}-T_{kij}]$$ $$T^{(0,2;1)}=\frac 1 3 [T_{ijk}+T_{kji}-T_{jik}-T_{jki}]$$
Each of these has 8 = 3 + 5 degrees of freedom, corresponding to a $$J=2$$ and $$J=1$$ part. The vector part ($$J=1$$) is found by taking the non-zero trace:
$$v^1_i = T^{(0,1;2)}_{ijj} = - T^{(0,1;2)}_{jji}$$ $$v^2_i = T^{(0,2;1)}_{ijj} = - T^{(0,2;1)}_{jij}$$
Those can be subtracted (with suitable a outer product with $$\delta_{ij}$$) to leave a 5 DoF object that transforms like a rank-2 tensor.
• Thank you so much! This is super helpful – Shep Bryan Apr 28 at 20:46
|
2019-10-17 06:43:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 55, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436318874359131, "perplexity": 420.19040643049794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00246.warc.gz"}
|
http://math.stackexchange.com/questions/193958/compute-int-0-infty-e-yya-mathrmd-y
|
# Compute $\int_{0}^{+\infty} e^{-y}y^a\mathrm{d} y$
Does the following integral have a finite value? How to compute it? $$\int_{0}^{+\infty} e^{-x^k}\mathrm{d} x$$ where $k$ is given and $0<k<1$. By substituting $x^k=y$ we may obtain an equivalent integral $$\int_{0}^{+\infty} e^{-y}y^a\mathrm{d} y$$ where $a>0$ is given.
-
Take a look at en.wikipedia.org/wiki/Gamma_function – Eric Naslund Sep 11 '12 at 4:19
That looks like a gamma function. – Tunococ Sep 11 '12 at 4:20
@Tunococ: 11 seconds :) – Eric Naslund Sep 11 '12 at 4:20
Existence does not require knowing about the Gamma function. Our integrand behaves nicely near $0$, and in the long run decays (far) more rapidly than $1/x^2$. – André Nicolas Sep 11 '12 at 4:55
@AndréNicolas: I got your meaning that once the integrand decays faster than $1/x^2$ and then the integral has a finite value. Is it correct in the long run the integrand decays more rapidly than any $1/x^k$ with $k>0$? – Shiyu Sep 11 '12 at 6:27
|
2015-07-04 23:16:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.974458634853363, "perplexity": 462.0719411397542}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096991.38/warc/CC-MAIN-20150627031816-00071-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://gamedev.stackexchange.com/questions/177329/unwanted-rotations-with-transform-rotatearound
|
# Unwanted rotations with transform.RotateAround
I didn't tough i would have issues with this, but unfortunatly I did. I have a Character (which is more a pile of cube) with a body, a empty GameObject for each leg attached to the body with a join and 2 cubes for leg too. The body and the two empty gameobject are child of a Empty gamebject. The 2 cube that represent leg are both child of one of the empty gameObject. What I want to achieve is to rotate the leg around the position of its empty parent by doing this:
if(Input.GetKey("l"))
{
l_leg.transform.RotateAround(l_leg.transform.parent.transform.position, new Vector3(1,0,0), 75 * Time.deltaTime);
}
unfortunatly, I only want it to rotate on x, but all of the axis are changing, which lead to unwanted rotation. Now, I know rotations aren't independant like translations, but I don't know how to fix this. If possible, i'd prefer to rotate locked joint via script but I have now idea how, I haven't seen anything about it when I looked up]2
• Have you considered just rotating the empty parent instead, about its local x axis, no RotateAround required? – DMGregory Nov 26 '19 at 13:49
• Yes but since I want the legs to be affected by gravity, I add a rigidbody, and to connect the legs to the body, I use joints on the empties that i connect to the body object, so I can't rotate the empty. – Samuel Fyckes Nov 26 '19 at 21:57
• Sounds like you want to be rotating them with joint forces instead then. Want to add that detail to your question? – DMGregory Nov 26 '19 at 22:12
• Wait, is there anyway to lock the joint but to still rotate it on an axis with script ? – Samuel Fyckes Nov 26 '19 at 22:46
|
2020-01-27 08:39:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33477476239204407, "perplexity": 922.9819375958991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00091.warc.gz"}
|
http://mathhelpforum.com/discrete-math/173240-binomial-thm.html
|
1. ## Binomial Thm
Use the binomial theorem to Prove:
The sum as k=0 --> n OF (nCk)*2^k = 3^n
Thanks...
2. So you want to prove that
$\displaystyle\sum_{k=0}^{n}{n\choose k} 2^{k}=3^{n},$ right?
Have you had any ideas so far?
3. Yes thats correct. And no, I don't see what to do...
4. Originally Posted by jzellt
And no, I don't see what to do...
Hint: 1 to any power is 1, i.e. $1 = (1)^{n-k}$.
5. I just really don't see this one. Is it possible for someone to show me the proof? Thanks
6. $(a+b)^n=\displaytype{\sum_{k=0}^n \binom{n}{k}b^ka^{n-k}}$
consider a=1 and b=2
7. I think I can go from here... Thanks!
|
2016-08-27 23:08:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650336503982544, "perplexity": 1163.8360645301063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982931818.60/warc/CC-MAIN-20160823200851-00176-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://expskill.com/question/what-is-the-value-of-int_23cos%E2%81%A1x-frac-3x4dx/
|
# What is the value of $$\int_2^3$$cos(x)-$$\frac {3}{x4}$$dx .
Category: QuestionsWhat is the value of $$\int_2^3$$cos(x)-$$\frac {3}{x4}$$dx .
Editor">Editor Staff asked 11 months ago
What is the value of $$\int_2^3$$cos(x)-$$\frac {3}{x4}$$dx .
(a) sin (3) – sin (2)
(b) sin (3) – sin (9) – $$\frac {19}{288}$$
(c) sin (8) – sin (2) – $$\frac {19}{288}$$
(d) sin (3) – sin (2) – $$\frac {19}{288}$$
This question was posed to me during an interview for a job.
This intriguing question comes from Definite Integral in chapter Integrals of Mathematics – Class 12
NCERT Solutions for Subject Clas 12 Math Select the correct answer from above options
Interview Questions and Answers, Database Interview Questions and Answers for Freshers and Experience
Right option is (d) sin (3) – sin (2) – $$\frac {19}{288}$$
Best explanation: $$\int_2^3$$cos(x)-$$\frac {3}{x4}$$dx = $$\int_2^3$$sin(x) dx + $$\int_2^3 \frac {3}{4}$$x^-3 dx
= (sin (3) + $$\frac {3}{4}$$3^-3) – (sin (2) + $$\frac {3}{4}$$2^-3)
= sin (3) – sin (2) – $$\frac {19}{288}$$
|
2022-12-04 04:48:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5815160870552063, "perplexity": 6215.253208075924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00240.warc.gz"}
|
http://www.acmerblog.com/POJ-2279-Mr-Young%27s-Picture-Permutations-blog-674.html
|
2013
11-11
# Mr. Young’s Picture Permutations
Mr. Young wishes to take a picture of his class. The students will stand in rows with each row no longer than the row behind it and the left ends of the rows aligned. For instance, 12 students could be arranged in rows (from back to front) of 5, 3, 3 and 1 students.
X X X X X
X X X
X X X
X
In addition, Mr. Young wants the students in each row arranged so that heights decrease from left to right. Also, student heights should decrease from the back to the front. Thinking about it, Mr. Young sees that for the 12-student example, there are at least two ways to arrange the students (with 1 as the tallest etc.):
1 2 3 4 5 1 5 8 11 12
6 7 8 2 6 9
9 10 11 3 7 10
12 4
Mr. Young wonders how many different arrangements of the students there might be for a given arrangement of rows. He tries counting by hand starting with rows of 3, 2 and 1 and counts 16 arrangements:
123 123 124 124 125 125 126 126 134 134 135 135 136 136 145 146
45 46 35 36 34 36 34 35 25 26 24 26 24 25 26 25
6 5 6 5 6 4 5 4 6 5 6 4 5 4 3 3
Mr. Young sees that counting by hand is not going to be very effective for any reasonable number of students so he asks you to help out by writing a computer program to determine the number of different arrangements of students for a given set of rows.
The input for each problem instance will consist of two lines. The first line gives the number of rows, k, as a decimal integer. The second line contains the lengths of the rows from back to front (n1, n2,…, nk) as decimal integers separated by a single space. The problem set ends with a line with a row count of 0. There will never be more than 5 rows and the total number of students, N, (sum of the row lengths) will be at most 30.
The output for each problem instance shall be the number of arrangements of the N students into the given rows so that the heights decrease along each row from left to right and along each column from back to front as a decimal integer. (Assume all heights are distinct.) The result of each problem instance should be on a separate line. The input data will be chosen so that the result will always fit in an unsigned 32 bit integer.
1
30
5
1 1 1 1 1
3
3 2 1
4
5 3 3 1
5
6 5 4 3 2
2
15 15
0
1
1
16
4158
141892608
9694845
//* @author: ccQ.SuperSupper
import java.math.*;
import java.util.*;
public class Main {
/**
* @param args
*/
public static void main(String[] args) throws Exception{
// TODO Auto-generated method stub
BigInteger num ,b;//= new BigInteger;
num=BigInteger.valueOf(1);
Scanner cin = new Scanner(System.in);
int n,i,j,sum,t;
int way[] = new int[100];
int flag[][] = new int[100][100];
while(cin.hasNext())
{
n = cin.nextInt();
if(n==0) break;
for(i=1,sum=0;i<=n;++i)
{
way[i]=cin.nextInt();
sum+=way[i];
}
num = BigInteger.valueOf(1);
for(i=2;i<=sum;++i)
{
b = BigInteger.valueOf(i);
num = num.multiply(b);
}
//System.out.println(num);
for(i=1;i<=n;++i) for(j=1;j<=way[i];++j)
{
sum=way[i]-j;
for(t=i+1;t<=n;++t) if(way[t]>=j) sum++;
sum++;
b = BigInteger.valueOf(sum);
//System.out.println(b);
num = num.divide(b);
}
System.out.println(num);
}
}
}
1. 是穷举,但是代码有优化(v数组),并不是2^n。测试数据应该没问题,之前有超时的代码。
2. 第一句可以忽略不计了吧。从第二句开始分析,说明这个花色下的所有牌都会在其它里面出现,那么还剩下♠️和♦️。第三句,可以排除2和7,因为在两种花色里有。现在是第四句,因为♠️还剩下多个,只有是♦️B才能知道答案。
3. 我没看懂题目
2
5 6 -1 5 4 -7
7 0 6 -1 1 -6 7 -5
我觉得第一个应该是5 6 -1 5 4 输出是19 5 4
第二个是7 0 6 -1 1 -6 7输出是14 7 7
不知道题目例子是怎么得出来的
|
2017-08-17 09:42:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3535347580909729, "perplexity": 411.5023008465069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103167.97/warc/CC-MAIN-20170817092444-20170817112444-00261.warc.gz"}
|
https://enpiar.com/distro/
|
The goal of distro is to provide a standardized interface to version and other facts about the current system’s Linux distribution. It is similar in spirit (though far more limited in scope) to the Python distro package.
Different Linux distributions and versions record version information in a number of different files and commands. The lsb_release command line utility standardizes some of the access to this information, but it is not guaranteed to be installed. This package draws from the various possible locations of version information and provides a single function for querying them.
## Installation
To install distro from CRAN,
You can install a development version with:
remotes::install_github("nealrichardson/distro")
## Example
There is only one public function in the package:
distro::distro()
# $id # [1] "ubuntu" # #$version
# [1] "16.04"
#
# $codename # [1] "xenial" # #$short_version
# [1] "16.04"
## Contributing
Does distro fail to produce the expected result on your system? We’ve tried to make it easy to extend the tests to accommodate new distributions and ways of expressing distribution information. That way, you can add information from your system to the tests as a way of setting up a minimum reproducible example.
• If your system has lsb_release installed, see tests/test-lsb-release.R for how to record the results of the command with different flags.
• If your system does not have lsb_release, you probably have an /etc/os-release file. Copy the contents of your /etc/os-release to the tests/os-release directory and we can set up a test using that.
• If your system has neither of those but has an /etc/system-release file, see tests/test-system-release.R for how to provide the contents of that file in a test
• If your system has none of these, please open an issue!
|
2021-04-19 20:46:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.335615336894989, "perplexity": 1678.8602809260003}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00147.warc.gz"}
|