https://dsp.stackexchange.com/tags/octave/hot
# Tag Info

9 This comes from music terminology. The name "octave" comes from the fact that in heptatonic musical scales (the prevalent scales in Western music), the note with a 2:1 frequency ratio is the eighth note in the scale. For example, in the C major scale (C D E F G A B C) the eighth note is one octave above / has a 2:1 frequency ratio with the ...

4 I am afraid that it is rather impossible without proper hardware. A swept sine is fine as a general method, but you would need either: a reference transducer with a known (preferably linear) frequency response, so that you can find the difference between the two; or a signal actuator with a known frequency response, so that you can again find the difference between ...

4 Frequency is the derivative of phase, so you need to start with the expression for frequency, which in your case is $f = 200\exp(-\alpha t)$. At time $t=0$ this has a frequency of 200, but we need to solve for $\alpha$ so that the frequency at time $t=2$ is 20. Solving for $\alpha$ gives $\alpha=1.1513$. To get the correct expression for the phase you ...

4 Dynamic Time Warping is pretty well explained on this site. I'll use some of the diagrams from the PPT on that site to explain. The idea is to divide the signals into segments (frames) and then compare frames sequentially through each signal. As illustrated below, motion from a segment in one signal to the next segment depends on the similarity to the ...

3 You're trying to solve what's called the Perona–Malik nonlinear diffusion problem (sometimes mistakenly called anisotropic diffusion). Anyhow, the simplest code for that is Anisotropic Diffusion (Perona & Malik) on the MATLAB File Exchange. There is a more advanced (actually anisotropic) algorithm in Fast Anisotropic Curvature Preserving ...

3 -6 dB / octave is trivial.
A common reference for a pinking filter is http://www.firstpr.com.au/dsp/pink-noise/

3 My guess is that imshow, for some reason, displays greyscale images at a lower color depth than your image viewer does. The contours are actually just the quantization steps of the smaller number of displayable shades.

3 The frequency response of a system is the Fourier transform of its impulse response. Since we're in the real world and have a finite number of samples observed over a finite time interval, we use the Discrete Fourier Transform (DFT). In Octave and MATLAB the DFT is implemented efficiently using the Fast Fourier Transform (FFT). In order to ...

3 Convolution and polynomial multiplication are equivalent by definition. Offsets are usually introduced through indexing. The treatment of the ends is important.

3 Your image is an indexed image, meaning it takes integer values that are mapped to colours on screen via a particular colourmap. You can see this if you read your image in as: [I, M] = imread('dots_3gray.png'); where I becomes your 'indexed' image (which seems to contain indices from 0 to 158) and M is the colormap used to interpret it. ...

3 I simply use unwrapped atan2(IQ(i)) - atan2(IQ(i-1)) to estimate a discrete derivative, then low-pass filter to below 15 kHz. Although with a shallow slope, the first-order approximation to atan() given by Boschen will work just as well. Your noise might be due to not unwrapping the phase delta, or to not low-pass filtering after taking the phase ...

3 Fat32's answer is correct and shows a common pitfall. The reason you must do this is that the output of the autocorrelation of a signal of length $N$ has length $2N - 1$. You were performing the FFT with the original sample size of $N = 1000$, effectively destroying information necessary to retrieve the autocorrelation of size $2N - 1 = 1999$.
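The zero-padding pitfall described in that last answer is easy to verify numerically. The sketch below (using NumPy and random data, not the asker's signal) shows that the FFT-based autocorrelation only matches the direct linear autocorrelation when the FFT size is at least $2N - 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
N = len(x)

# Direct (time-domain) linear autocorrelation: length 2N - 1.
direct = np.correlate(x, x, mode="full")

# FFT-based autocorrelation: pad to at least 2N - 1 samples,
# otherwise the result is the *circular* autocorrelation.
nfft = 2 * N - 1
X = np.fft.fft(x, nfft)
via_fft = np.fft.ifft(X * np.conj(X)).real

# Reorder from [lag 0 .. N-1, lag -(N-1) .. -1] to [-(N-1) .. N-1].
via_fft = np.concatenate((via_fft[-(N - 1):], via_fft[:N]))

assert np.allclose(direct, via_fft)
```

Using `nfft = N` instead would wrap the negative lags back onto the positive ones, which is exactly the information loss the answer describes.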
2 Here is a short example of how to deal with the FFT in Octave/MATLAB:

```matlab
function freq(npts, len, basefreq)
  x = [0 : npts - 1] / npts * len;
  y = sin(basefreq * x * 2 * pi);
  F = abs(fft(y));
  half = npts / 2;
  F = F(1 : half);
  plot([0 : half - 1] / len, F);
  printf("Base frequency = %g\n", (find(F == max(F)) - 1) / len);
endfunction
```

try this ...

2 A helpful construction is that of a "convolution unit". If you find a signal that, convolved with itself, stays identical, then you know a lot about how your convolution algorithm works. Note that while all discrete units (a single one surrounded by zeros) are convolution units, certain implementations might introduce padding or offsets so that the resulting ...

2 Reference: Digital Image Processing, Rafael Gonzalez. The whole filtering process can be summarized in the following steps: 1. Given an input image f(x,y) of size M x N, obtain the padding parameters P and Q as P = 2M and Q = 2N. The problem with using the DFT is that both the input and output images are periodic. This causes interference between ...

2 The first steps by Tzanetakis can be described as a Scheirer algorithm ("Tempo and beat analysis of acoustic musical signals" by Eric Scheirer); you can use DTW, filterbanks or an FFT to split your signal into N subbands. I'll try to help you with what I did in the past; my steps are described below. I can show a simple example of 6 (six) (200, 400, 800, 1600, ...

2 1) A Hilbert transform has a very long impulse response (above some given noise floor), so you need far more data to manufacture an analytic signal; otherwise you won't have enough to span the width of the Hilbert impulse response filter without serious edge truncation effects. 2) Instantaneous frequency estimates from this type of artificially ...

2 The final step is pretty straightforward. All you need to do is apply the Hilbert transform to each IMF and extract the instantaneous frequency from the analytic signal.
Instantaneous frequency is given by: $$\omega(t)=\dfrac{d\phi(t)}{dt}$$ where $\phi(t)=\mathrm{arg}[x_a(t)]$ (the unwrapped phase of the analytic signal). Keep in mind that MATLAB (Octave) ...

2 Yes, this code is correct for implementing a moving average filter. Nevertheless I recommend using the built-in smooth function in MATLAB: y = smooth(x, 100, 'moving')

2 Here's what to do for -3 dB/octave (what I remember from 1985): first take your 1/sqrt(f) magnitude function and inverse-warp that frequency response to what it will look like in the analog s-domain. Instead of only a -3 dB/oct ramp (which is what you get after BLT frequency warping), you have a -3 dB/oct ramp that starts to level out a little in the s-domain. ...

2 I think the solution is to use $\log_{10}(y)$ instead of $\log(y)$. log(y) in Octave is the natural log, but you want log base 10, which is log10(y). Confirming this: $20\log_{10}(0.5) \approx -6 \textrm{ dB/octave}$, whereas $20\ln(0.5) \approx -13.86$ per octave with the natural log. I am using $\ln$ for natural log to avoid any confusion.

2 For the sake of completeness, here is my solution:

```matlab
fs = 44100;                     # sampling frequency
y = wavread('i-rsp.wav');       # read wav file
b = y(:, [1]);                  # use first channel for analysis
figure(1);                      # plot the impulse response
plot(b, 'marker', '*');         # ...
[h, w] = freqz(b, 1, 512, fs);  # ...
```

2 It seems like it is a bug. Octave's wavwrite() function writes 32-bit wav as 32-bit ints (type 1 format) instead of normalized 32-bit floats (type 3 format). Unfortunately, 1 can't be represented in 32-bit type 1 format, which is why you get this result. The solution could be to use the audiowrite() function, or to write 0x7fffffff instead of 1. More about ...

2 Problem solved. It looks like the calculation of Os ($\omega_s$) was the culprit for this problem.
The proper equation is: Os=Om*sqrt((gm^2-g1^2)*(1-gm^2))/(1-gm^2); instead of: Os=Om*(sqrt((gm^2-g1^2)*(1-gm^2))/(1-gm^2));

2 You are essentially seeing a Cascade-Integrator-Comb (CIC) response, which is identical to a moving-average filter (aliased-sinc magnitude response) as seen with CIC filter structures. Consider what is happening in units of phase: you start with a white noise signal which is translated from magnitude directly to units of frequency in the FM ...

2 Is there any possibility to implement speech recognition integrated with Google Voice that converts speech to text in GNU Octave? Yes. The Google Cloud command line utility returns results in JSON. Octave is working on its JSON support, but until then you can use something like this. What we are missing now is triggering the whole process from within ...

2 Your code works fine. But for the sake of demonstration clarity, just get rid of all the fftshift functions and change your frequency range too. The main problem is that you should pass FFT sizes when calling fft(), which by default uses the signal length as the FFT size; that was the problem you faced on the following line: ### Energy Spectral Density ...

1 A. Tanenbaum is credited with: "The nice thing about standards is that you have so many to choose from." Two main options, for PCM or other formats: 1) the reading is not fully adapted; 2) the file is not fully compatible. You can try other higher-level audio functions as described in 33.1 Audio File Utilities, and cross-check with a hexadecimal ...

1 I hope I understand what you're asking correctly. To get the minimum slope of the image I would calculate the slope at each point (should be fast enough) and then take the minimal one. For example:

```matlab
x = 1:.05:3000;
y = 100 + x + 300*sin(2*pi*x/1000);
origin = [0 0];  % or origin = [ x(1) y(1) ];
slope = (y - origin(2)) ./ (x - origin(1));
[ minslope ,...
```

1 Presuming you are using MATLAB or GNU Octave, then yes, that should be OK.
Compare with this tutorial on doing a moving average with convolution. You've done the division on your kernel, but it makes no mathematical difference whether you do it there or after the convolution. Practically, it takes less time if you do it your way (scaling the kernel). You ...

Only top-voted, non-community-wiki answers of a minimum length are eligible.
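The equivalence claimed in that moving-average answer (scale the kernel, or divide after the convolution) can be checked with a few lines of NumPy; the data and window length here are arbitrary:

```python
import numpy as np

x = np.arange(20, dtype=float)
M = 5  # moving-average window length

# Option 1: scale the kernel first.
y1 = np.convolve(x, np.ones(M) / M, mode="valid")

# Option 2: convolve with an unscaled kernel and divide afterwards.
y2 = np.convolve(x, np.ones(M), mode="valid") / M

# Both give the M-point moving average.
assert np.allclose(y1, y2)
assert np.isclose(y1[0], np.mean(x[:M]))
```

Scaling the kernel once (length M) is cheaper than dividing every output sample when the signal is much longer than the window, which is the practical point the answer makes.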
2021-04-19 00:40:10
https://engineering.monstar-lab.com/2020/08/12/Let-it-Flow
I've been working with Flow in production for almost a year now. The lightweight Kotlin coroutines stream library has completely replaced RxJava for our new projects. RxJava is still a trend for Android development even in 2020, but this article is not another "RxJava vs. Coroutines". It is here to show that you can use Flow with almost anything in the Android framework. Similar to when RxJava started to become a trend in Android development, there were tons of libraries that would turn anything on Android into an RxJava observable: RxBindings, RxActivityResult, RxPermissions, RxSharedPreferences, you name it. Can the same be done using Flow? Yes, of course. And with the power of Kotlin: easier, more comprehensible and lighter.

Before we begin I would like to make a small disclaimer. Using Flow for these scenarios can be overkill, and we don't actually need any threading for the examples below. If the scope is not handled properly it can cause memory leaks.

```groovy
dependencies {
    ...
    implementation "org.jetbrains.kotlinx:kotlinx-coroutines-core:${coroutinesVersion}"
    implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:${coroutinesVersion}"
    // Not required; used in this example for lifecycleScope, available from version 2.2.0 onwards
    implementation "androidx.lifecycle:lifecycle-runtime-ktx:${lifecycleVersion}"
}
```

Let's think of a simple example first: a click listener.

```kotlin
fun View.onClickFlow(): Flow<View> {
    return callbackFlow {
        setOnClickListener { offer(it) }
        awaitClose { setOnClickListener(null) }
    }
}

lifecycleScope.launch {
    btn.onClickFlow()
        .collect { view ->
            Toast.makeText(view.context, "Clicked", Toast.LENGTH_SHORT).show()
        }
}
```

Now we can use all Flow operators on a view click listener. Let's try with a TextWatcher:

```kotlin
fun EditText.afterTextChangedFlow(): Flow<Editable?> {
    return callbackFlow {
        val watcher = object : TextWatcher {
            override fun afterTextChanged(s: Editable?) {
                offer(s)
            }
            override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {}
            override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {}
        }
        addTextChangedListener(watcher)
        awaitClose { removeTextChangedListener(watcher) }
    }
}

lifecycleScope.launch {
    editText.afterTextChangedFlow()
        .collect { textView.text = it }
}
```

Now we can add debounce, filter, map, whatever we want. Let's try it:

```kotlin
lifecycleScope.launch {
    editText.afterTextChangedFlow()
        .debounce(1000)
        .collect { textView.text = it }
}
```

Adding debounce makes the flow wait an amount of time before emitting values. This is a very common use case for search operations that require network requests, for example.

Enough with View flows; let's try applying the same concept to SharedPreferences:

```kotlin
fun <T> SharedPreferences.observeKey(key: String, default: T): Flow<T> {
    return callbackFlow {
        send(getItem(key, default))
        val listener = SharedPreferences.OnSharedPreferenceChangeListener { _, k ->
            if (key == k) {
                offer(getItem(key, default))
            }
        }
        registerOnSharedPreferenceChangeListener(listener)
        awaitClose { unregisterOnSharedPreferenceChangeListener(listener) }
    }
}

fun <T> SharedPreferences.getItem(key: String, default: T): T {
    @Suppress("UNCHECKED_CAST")
    return when (default) {
        is String -> getString(key, default) as T
        is Int -> getInt(key, default) as T
        is Long -> getLong(key, default) as T
        is Boolean -> getBoolean(key, default) as T
        is Float -> getFloat(key, default) as T
        is Set<*> -> getStringSet(key, default as Set<String>) as T
        is MutableSet<*> -> getStringSet(key, default as MutableSet<String>) as T
        else -> throw IllegalArgumentException("generic type not handled")
    }
}

lifecycleScope.launch {
    launch {
        repeat(10) {
            delay(300)
            sharedPreferences.edit { putString("key", "Counting$it") }
        }
    }
    sharedPreferences.observeKey("key", "")
        .collect { string -> textView.text = string }
}
```

Super easy, right? How about BroadcastReceiver? Why not.
```kotlin
fun broadcastReceiverFlow(c: Context, intentFilter: IntentFilter): Flow<Intent> {
    return callbackFlow {
        val receiver = object : BroadcastReceiver() {
            override fun onReceive(context: Context, intent: Intent) {
                offer(intent)
            }
        }
        c.registerReceiver(receiver, intentFilter)
        awaitClose { c.unregisterReceiver(receiver) }
    }
}
```

That's all for today's blog post, see you soon! Article photo by SpaceCitySpin
2020-09-26 12:58:23
http://www.physicsforums.com/showthread.php?t=225665
You know that $$T=2\pi \sqrt{\frac{L}{g}}$$ so that $$T^2=4\pi^2 \frac{L}{g}$$ which means that $$g=\frac{4\pi^2}{T^2}L$$ To find the error you do this: $$\frac{\delta g}{g}=2\frac{\delta T}{T} + \frac{\delta L}{L}$$ Here $\delta T$ is the error in $T$, and similarly $\delta L$ is the error in $L$.
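As a quick numerical check of the formula and the error propagation above (the measurement values here are made up for illustration, not taken from the thread):

```python
import math

# Hypothetical measurements: illustrative values only.
L, dL = 1.000, 0.005   # pendulum length and its uncertainty [m]
T, dT = 2.007, 0.010   # measured period and its uncertainty [s]

g = 4 * math.pi**2 * L / T**2       # g = 4*pi^2 * L / T^2
rel_err = 2 * dT / T + dL / L       # dg/g = 2*dT/T + dL/L
dg = g * rel_err

print(f"g = {g:.2f} +/- {dg:.2f} m/s^2")
```

Note the period error counts twice because $T$ appears squared, which is why timing many swings (reducing $\delta T$) is the standard way to improve this experiment.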
2014-08-27 11:07:16
https://stacks.math.columbia.edu/tag/0582
Lemma 10.10.1. Exactness and $\mathop{\mathrm{Hom}}\nolimits _ R$. Let $R$ be a ring.

1. Let $M_1 \to M_2 \to M_3 \to 0$ be a complex of $R$-modules. Then $M_1 \to M_2 \to M_3 \to 0$ is exact if and only if $0 \to \mathop{\mathrm{Hom}}\nolimits _ R(M_3, N) \to \mathop{\mathrm{Hom}}\nolimits _ R(M_2, N) \to \mathop{\mathrm{Hom}}\nolimits _ R(M_1, N)$ is exact for all $R$-modules $N$.

2. Let $0 \to M_1 \to M_2 \to M_3$ be a complex of $R$-modules. Then $0 \to M_1 \to M_2 \to M_3$ is exact if and only if $0 \to \mathop{\mathrm{Hom}}\nolimits _ R(N, M_1) \to \mathop{\mathrm{Hom}}\nolimits _ R(N, M_2) \to \mathop{\mathrm{Hom}}\nolimits _ R(N, M_3)$ is exact for all $R$-modules $N$.

Proof. Omitted. $\square$
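Since the proof is omitted, here is a sketch of one direction of (1); this is the standard argument, not taken from the Stacks Project text. Suppose the Hom sequence is exact for every $R$-module $N$.

For exactness at $M_3$ (surjectivity of $M_2 \to M_3$): take $N = M_3/\operatorname{im}(M_2)$ and let $q\colon M_3 \to N$ be the quotient map. Then $q$ restricts to zero in $\operatorname{Hom}_R(M_2, N)$, and injectivity of $\operatorname{Hom}_R(M_3, N) \to \operatorname{Hom}_R(M_2, N)$ forces $q = 0$, i.e. $M_3 = \operatorname{im}(M_2)$.

For exactness at $M_2$: taking $N = M_3$, the map $M_2 \to M_3$ is the image of $\operatorname{id}_{M_3}$ under $\operatorname{Hom}_R(M_3, M_3) \to \operatorname{Hom}_R(M_2, M_3)$, hence lies in the kernel of restriction to $M_1$; so the composition $M_1 \to M_2 \to M_3$ is zero and $\operatorname{im}(M_1) \subseteq \ker(M_2 \to M_3)$. Conversely, take $N = M_2/\operatorname{im}(M_1)$ with quotient map $p$. Since $p$ restricts to zero on $M_1$, exactness at $\operatorname{Hom}_R(M_2, N)$ lets $p$ factor through $M_2 \to M_3$, so $\ker(M_2 \to M_3) \subseteq \ker p = \operatorname{im}(M_1)$. The converse direction and part (2) are similar diagram chases.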
2021-05-09 17:09:35
https://quant.stackexchange.com/questions/63418/spot-the-mistake-in-final-step-of-bs-solution-via-pde-approach
# Spot the mistake in final step of BS solution via PDE approach! • Doing the last step (the un-change of variables), where in my case I have $$k = -\frac{2r}{\sigma^{2}},$$ $$v(\tau, x) = u(\tau, x) \cdot \exp\left(-\frac{1}{4}(k+1)^{2} \tau - \frac{1}{2}(k-1)x\right),$$ $$x = \ln\left(\frac{S}{F}\right),$$ $$\tau = \frac{\sigma^{2}}{2}(T-t),$$ $$C = F \cdot v(\tau,x),$$ I get $$C(t,S) = S \Phi(z_{1}) - F e^{-r(T-t)} \Phi(z_{2}),$$ where $$z_{1} = \frac{\ln(\frac{S}{F}) + (\color{red}{-r} + \frac{1}{2}\sigma^{2})(T-t)}{\sigma \sqrt{T-t}}$$ and $$z_{2} = \frac{\ln(\frac{S}{F}) + (\color{red}{-r} - \frac{1}{2}\sigma^{2})(T-t)}{\sigma \sqrt{T-t}}.$$ I can't figure out why I am getting a $$\color{Red}{\text{negative rate}}$$; any suggestions or hunches are much appreciated. • It is hard to understand your question; for example, it is not clear where your un-change of variables starts from. Do not assume everyone has your context. – Gordon Apr 16 at 18:05
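For comparison (this is a reference point, not a diagnosis of the asker's algebra): with the usual sign convention $k = \frac{2r}{\sigma^{2}}$ (so $k > 0$ for $r > 0$), the same substitution steps lead to the textbook result

$$z_{1,2} = \frac{\ln\left(\frac{S}{F}\right) + \left(r \pm \frac{1}{2}\sigma^{2}\right)(T-t)}{\sigma \sqrt{T-t}},$$

so a flipped sign in $k$ propagates directly into the drift term of $z_1$ and $z_2$. The definition of $k$, together with the direction of the time change $\tau = \frac{\sigma^{2}}{2}(T-t)$, is therefore a plausible place to look for the stray minus sign.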
2021-07-23 23:18:59
https://deltasdnd.blogspot.com/2011/03/shillings-and-pounds-were-not-coins.html?showComment=1300712211916
Wednesday, March 16, 2011 Shillings and Pounds Were Not Coins In regards to archaic values like shillings and pounds in the Middle Ages, the term of art used by experts and academics is to say that they were "moneys-of-account". But what does that mean, exactly? Let's be very clear, it indicates this: Shillings and pounds were not coins; they were not paper banknotes; they weren't anything physical that you could hold in your hand or carry around whatsoever. They were purely abstract counting units. Here's an analogy from the current day -- we frequently speak of a "Grand" and know that this indicates a value of one thousand dollars. "This computer cost two grand"; "You could buy that used car for ten grand"; "We spent thirty grand on the wedding". (Or perhaps you prefer a "G" or a "K" or a "dime". ) But of course, there is no "grand" physical money, like a coin or a banknote. We know that it's a counting concept, separate and distinct from the legal notes that we carry around with us. In principle, you could walk into a dealership and pay for a car in cash, and you'd fork over a wad of one hundred $100 bills (or something). You can even write in your ledger something like "10K" and conveniently record the transaction. * So the same is true with shillings and pounds in a medieval context. When Charlemagne first established the 1:20:240 money system (denier/sou/livre, or penny/shilling/pound; around 800 A.D.), only the smallest "deniers" were actually minted (each 1/240th of a pound weight of silver metal); no other coins existed. Accounting books were kept, recording prices and purchases in shillings/pounds, but they weren't coins that you could carry around with you and hand over to a merchant. And so it was throughout the entire Middle Ages; although a wide variety of silver and gold coins came into use, they weren't ever so large as to be worth an entire shilling or pound. 
For example, the first coin I can find that was worth "one pound value" is the English Gold Sovereign, first minted in 1489 (which I'll point out counts as being after the end of the Middle Ages). And even this was meant as bullion only (it had no numerical value stamped on it). As one site on gold coins says: "The First One Pound Coin: The gold sovereign came into existence in 1489 under King Henry VII... The pound sterling had been a unit of account for centuries, as had the mark. Now for the first time a coin denomination was issued with a value of one pound sterling." Or here (University of London Institute of Historical Research): "Values in the treasure were calculated in pounds, shillings and pence... although there were no coins equal to pounds and shillings and would not be until Henry VII's reign." Or this (Gies, Life in a Medieval City, p. 99): "The livre (pound) and sou (shilling), though used to count with throughout Europe, do not yet actually exist as coins."

So, in summary: "moneys-of-account" actually means "counting units of value that did not physically exist in any form". Nobody anywhere ever minted a gold coin worth 240 of their smallest coins, until almost the 16th century (and therefore it would be ahistorical, and present many problems, if we presented such a system in D&D). Can you find any examples to the contrary? (I've mentioned this in passing in prior money blogs, but wanted to highlight it on its own here.)

* The U.S. government did print $1,000 bills at one time, but they were discontinued in 1945, and have not circulated for several decades.

1. This is fascinating. Thanks for sharing your info on this topic! In the last D&D game I ran I ditched all the cp/sp/gp at chargen, let everyone choose 5 starting items from a list, and one item was "pouch with 20 silver coins". Everything else in the adventure was a silver coin, or a valuable item (e.g. gold candlestick etc.). After reading this I think I might make this standard. :)

2.
Again, great post. I think I might go with Stuart's idea as well....

3. Clearly I need to abandon all the pound coins in my campaign in favor of gold marks. Just kidding. I really like Stuart's idea of putting a bag o' coins on the list of starting items you can pick from. That easily eliminates a step in the chargen process. I'm all for streamlining chargen.

4. This is just one more reason why I tend to run my fantasy games in a broadly Arabian Nights setting rather than a European medieval one. I find that my players tend to start from a more common baseline of assumptions, and then it's up to us jointly to decide what else the world is, without it all defaulting to a kind of flavourless pseudo-"modernity." Also, drams and dinars! And at least some people traveling clear across the known world and bringing back stories.

5. Here's the list of starting items I had for our game. I also changed "Standard Rations" to "Cheese". :) I saw someone use "silver pennies", and I think anything where you name the coins is a nice way to start establishing the setting through character creation. http://tao-dnd.blogspot.com/

7. I blew the 1:20:200 (240) ratio of copper/silver/gold on the first night of the campaign. Said it was 1:10:100. No one seems to care. Word verification costsc - apt

8. I was at the Penn Museum at the University of Pennsylvania a couple of weekends ago, and they have a display with lots of different ancient coins, in excellent condition. A number of them are electrum, which I'd never seen specimens of before. They also have a few 'billon' coins, which are an alloy of silver and copper or bronze.

9. It seems to me that the sort of accounting that players do on their character sheets is analogous to the use of historical units of account. So in my silver-standard game I deal with large numbers of silver coins as pounds sterling, with 100 coins to a lb.
This interfaces well with a lb-based encumbrance system, and with treasure generation in 1000s of coins (I just divide by 1000 and read the result as lbs). So I'm fudging the difference between tower pounds and avoirdupois pounds, and decimalized and predecimalized pounds sterling, but... I'm ok with that.

10. Ten silver pieces = £1 (i.e., 1 lb of silver). Forget about specific coins completely and just deal in weight. How many D&D characters care about amounts less than 10 sp anyway?

11. Well... (re: Alexander and Nagora) those would probably be issues more on-topic for the other money blogs I had in the past. You probably already know what I actually do in my games.

12. An interesting fact to mention is that the lack of copper coins made everyday dealings a bit cumbersome. A silver piece would buy you around eight to ten pints of ale, so most items of this low value were dealt with either by exchange of goods or by trust (i.e. recordkeeping). Source: lectures with Prof. Richard Holt, Uni. of Tromsø.

13. CM: I guess that would make sense. Of course, what you really mean is that a real-world penny would buy about 8 pints (1 gallon of ale). (Agreed?) And my argument is always that that has the same status as the D&D copper piece. What D&D means by a "silver piece" has to be larger than that, like a sterling silver groat (4 pence). As per prior money blogs.

14. Very interesting. Singapore has only just stopped printing SGD10,000 notes as part of their legal tender, largely because of pressure from the USA, because of how such a large note facilitated money laundering and drug trafficking. Anyway, fascinating piece of history. Thanks for posting.

1. Wow, to me that's interesting news. Thanks for the comment!

15. >Nobody anywhere ever minted a gold coin worth 240 of their smallest coins, until almost the 16th century (and therefore it would be ahistorical, and present many problems, if we presented such a system in D&D). Can you find any examples to the contrary?
Roman gold coins were worth more than 240x their lowest bronze or copper denomination in many periods, e.g. 1 aureus = 25 denarii = 400 asses as of 140 BC. A Byzantine gold solidus/bezant/nomisma was worth thousands of bronze nummi. Single-nummus coins were struck at first, but even when they fell out of favour the larger 5-, 10-, 20-, or 40-nummus coins would also qualify (40 being borderline). These would be during the late Roman/early Medieval period. The later Byzantine gold hyperpyron was worth 864 copper tetartera (eventually falling as low as 288 tetartera due to debasement) during the Komnenian dynasty. Several ancient Chinese currencies would match your challenge, but generally not super stable ones, not coin-shaped, or both, e.g. Xin dynasty coinage.

16. Double-checking, I realized you neglected farthings! 1 silver farthing = 1/4 penny, debuted c. 1200. In the UK you then had the florin (= 6s = 72d = 288 copper farthings) and noble (= 6s 8d = 80d = 320 farthings) in 1344, which would technically be the first time England minted coins more than 240x their lowest denomination. In 1464 the noble was raised to 8s 4d = 100d = 400 farthings. Still, at the end of the day all of this is nitpicking. If anything it reinforces that a sp standard is eminently realistic.

1. Thanks for checking on that. I might have been a bit sloppy in phrasing that question, in that I think the whole post was meant in the context of "in the Middle Ages" (so this second point is particularly helpful). I'll point out that in later writings by Gygax (Gord novels, Tharizdun module) he posits bronze pieces at a value of 1 cp = 4 bp (and farthings were bronze in the early 20th c.). So likely Gygax had a semi-murky mental model of farthings = bp, pennies = cp, shillings = sp, pounds = gp.
http://fr.opennmt.net/OpenNMT-tf/package/opennmt.models.sequence_to_sequence.html
# opennmt.models.sequence_to_sequence module

Standard sequence-to-sequence model.

opennmt.models.sequence_to_sequence.shift_target_sequence(inputter, data)[source]

Prepares shifted target sequences. Given a target sequence `a b c`, the decoder input should be `<s> a b c` and the output should be `a b c </s>` for the dynamic decoding to start on `<s>` and stop on `</s>`.

Parameters:
• inputter – The opennmt.inputters.inputter.Inputter that processed data.
• data – A dict of tf.Tensor containing ids and length keys.

Returns: The updated data dictionary with ids the sequence prefixed with the start token id and ids_out the sequence suffixed with the end token id. Additionally, length is increased by 1 to reflect the added token on both sequences.

class opennmt.models.sequence_to_sequence.EmbeddingsSharingLevel[source]

Bases: object

Level of embeddings sharing. Possible values are:

• NONE: no sharing (default)
• SOURCE_TARGET_INPUT: share source and target word embeddings
• TARGET: share target word embeddings and softmax weights
• ALL: share word embeddings and softmax weights

NONE = 0
SOURCE_TARGET_INPUT = 1
TARGET = 2
ALL = 3

static share_input_embeddings(level)[source]
Returns True if input embeddings should be shared at level.

static share_target_embeddings(level)[source]
Returns True if target embeddings should be shared at level.

class opennmt.models.sequence_to_sequence.SequenceToSequence(source_inputter, target_inputter, encoder, decoder, share_embeddings=0, alignment_file_key='train_alignments', daisy_chain_variables=False, name='seq2seq')[source]

A sequence-to-sequence model.

__init__(source_inputter, target_inputter, encoder, decoder, share_embeddings=0, alignment_file_key='train_alignments', daisy_chain_variables=False, name='seq2seq')[source]

Initializes a sequence-to-sequence model.

Parameters:
• source_inputter – An opennmt.inputters.inputter.Inputter to process the source data.
• target_inputter – An opennmt.inputters.inputter.Inputter to process the target data. Currently, only the opennmt.inputters.text_inputter.WordEmbedder is supported.
• encoder – An opennmt.encoders.encoder.Encoder to encode the source.
• decoder – An opennmt.decoders.decoder.Decoder to decode the target.
• share_embeddings – Level of embeddings sharing, see opennmt.models.sequence_to_sequence.EmbeddingsSharingLevel for possible values.
• alignment_file_key – The data configuration key of the training alignment file to support guided alignment.
• daisy_chain_variables – If True, copy variables in a daisy chain between devices for this model. Not compatible with RNN based models.
• name – The name of this model.

Raises: TypeError – if target_inputter is not an opennmt.inputters.text_inputter.WordEmbedder (same for source_inputter when embeddings sharing is enabled) or if source_inputter and target_inputter do not have the same dtype.

auto_config(num_devices=1)[source]

Returns automatic configuration values specific to this model.

Parameters:
• num_devices – The number of devices used for the training.

Returns: A partial training configuration.

compute_loss(outputs, labels, training=True, params=None)[source]

Computes the loss.

Parameters:
• outputs – The model outputs (usually unscaled probabilities).
• labels – The dict of labels tf.Tensor.
• training – Compute training loss.
• params – A dictionary of hyperparameters.

Returns: The loss or a tuple containing the computed loss and the loss to display.

print_prediction(prediction, params=None, stream=None)[source]

Prints the model prediction.

Parameters:
• prediction – The evaluated prediction.
• params – (optional) Dictionary of formatting parameters.
• stream – (optional) The stream to print to.

class opennmt.models.sequence_to_sequence.SequenceToSequenceInputter(features_inputter, labels_inputter, share_parameters=False, alignment_file_key=None)[source]

A custom opennmt.inputters.inputter.ExampleInputter that possibly injects alignment information during training.

initialize(metadata, asset_dir=None, asset_prefix='')[source]

Initializes the inputter.

Parameters:
• metadata – A dictionary containing additional metadata set by the user.
• asset_dir – The directory where assets can be written. If None, no assets are returned.
• asset_prefix – The prefix to attach to assets filename.

Returns: A dictionary containing additional assets used by the inputter.

make_dataset(data_file, training=None)[source]

Creates the base dataset required by this inputter.

Parameters:
• data_file – The data file.
• training – Run in training mode.

Returns: A tf.data.Dataset.

make_features(element=None, features=None, training=None)[source]

Creates features from data.

Parameters:
• element – An element from the dataset.
• features – An optional dictionary of features to augment.
• training – Run in training mode.

Returns: A dictionary of tf.Tensor.

opennmt.models.sequence_to_sequence.alignment_matrix_from_pharaoh(alignment_line, source_length, target_length, dtype=tf.float32)[source]

Parse Pharaoh alignments into an alignment matrix.

Parameters:
• alignment_line – A string tf.Tensor in the Pharaoh format.
• source_length – The length of the source sentence, without special symbols.
• target_length – The length of the target sentence, without special symbols.
• dtype – The output matrix dtype. Defaults to tf.float32 for convenience when computing the guided alignment loss.

Returns: The alignment matrix as a 2-D tf.Tensor of type dtype and shape [target_length, source_length], where [i, j] = 1 if the i th target word is aligned with the j th source word.

opennmt.models.sequence_to_sequence.guided_alignment_cost(attention_probs, gold_alignment, sequence_length, guided_alignment_type, guided_alignment_weight=1)[source]

Computes the guided alignment cost.

Parameters:
• attention_probs – The attention probabilities, a float tf.Tensor of shape $$[B, T_t, T_s]$$.
• gold_alignment – The true alignment matrix, a float tf.Tensor of shape $$[B, T_t, T_s]$$.
• sequence_length – The length of each sequence.
• guided_alignment_type – The type of guided alignment cost function to compute (can be: ce, mse).
• guided_alignment_weight – The weight applied to the guided alignment cost.

Returns: The guided alignment cost.

opennmt.models.sequence_to_sequence.align_tokens_from_attention(tokens, attention)[source]

Returns aligned tokens from the attention.

Parameters:
• tokens – The tokens on which the attention is applied as a string tf.Tensor of shape $$[B, T_s]$$.
• attention – The attention vector of shape $$[B, T_t, T_s]$$.

Returns: The aligned tokens as a string tf.Tensor of shape $$[B, T_t]$$.

opennmt.models.sequence_to_sequence.replace_unknown_target(target_tokens, source_tokens, attention, unknown_token='<unk>')[source]

Replaces all target unknown tokens by the source token with the highest attention.

Parameters:
• target_tokens – A string tf.Tensor of shape $$[B, T_t]$$.
• source_tokens – A string tf.Tensor of shape $$[B, T_s]$$.
• attention – The attention vector of shape $$[B, T_t, T_s]$$.
• unknown_token – The target token to replace.

Returns: A string tf.Tensor with the same shape and type as target_tokens but with all instances of unknown_token replaced by the aligned source token.
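As an aside, the Pharaoh parsing described above can be sketched in plain NumPy (an illustrative re-implementation, not the TensorFlow function documented here): each "s-t" pair in a Pharaoh line means the s-th source word is aligned with the t-th target word.

```python
import numpy as np

# Illustrative plain-NumPy sketch of the parsing performed by
# alignment_matrix_from_pharaoh (the real function operates on
# string tf.Tensor inputs and returns a tf.Tensor).
def pharaoh_to_matrix(alignment_line, source_length, target_length):
    matrix = np.zeros((target_length, source_length), dtype=np.float32)
    for pair in alignment_line.split():
        s, t = (int(i) for i in pair.split("-"))
        matrix[t, s] = 1.0   # [i, j] = 1 when target word i aligns with source word j
    return matrix

print(pharaoh_to_matrix("0-0 1-2 2-1", 3, 3))
```

A matrix in this layout can be compared entry-wise against attention probabilities of shape [T_t, T_s], which is what the guided alignment cost does per batch element.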
http://wmatem.eis.uva.es/scicade2013/?pc=13.3&ck=64
International Conference on Scientific Computation and Differential Equations

# Invited Talk

### Collocation for Singular BVPs in ODEs with Unsmooth Data

E.B. Weinmueller, I. Rachunkova and J. Vampolova

Abstract: We deal with BVPs for systems of ODEs with singularities. Typically, such problems have the form
\begin{eqnarray*}
&&z'(t)=\frac{M(t)}{t}z(t)+f(t,z(t)), \enspace t\in (0,1],\\
&&B_0z(0)+B_1z(1)=\beta,
\end{eqnarray*}
where $B_0$ and $B_1$ are constant matrices which are subject to certain restrictions for a well-posed problem. Here, we focus on the linear case where the function $f$ is unsmooth, $f(t)=g(t)/t$. We first deal with the analytical properties of the problem - existence and uniqueness of smooth solutions. To solve the problem numerically, we apply polynomial collocation, and for the linear IVPs we are able to provide the convergence analysis. It turns out that collocation retains its high order even in the presence of singularities, provided that the analytical solution is sufficiently smooth. We illustrate the theory by numerical experiments; the related tests were carried out using the MATLAB code sbvp [1].

Bibliography

[1] W. Auzinger, G. Kneisl, O. Koch and E.B. Weinmüller, A Collocation Code for Boundary Value Problems in Ordinary Differential Equations, Numer. Algorithms, 33 (2003), pp. 27-39.
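This problem class can also be experimented with outside of sbvp: SciPy's `solve_bvp` (a 4th-order collocation method) accepts exactly this kind of singular term through its `S` parameter, so that y' = f(x, y) + S @ y / x. A toy scalar sketch, with an assumed M = -1 and f ≡ 1 (the exact smooth solution is z(t) = t/2):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy scalar instance of the singular BVP above (not the sbvp code):
#   z'(t) = -z(t)/t + 1  on (0, 1],   z(1) = 1/2.
S = np.array([[-1.0]])               # the constant matrix M(t) = -1

def fun(x, y):
    return np.ones_like(y)           # the smooth part f(t, z) = 1

def bc(ya, yb):
    return np.array([yb[0] - 0.5])   # boundary condition z(1) = 1/2

x = np.linspace(0, 1, 11)            # the mesh must start at the singularity t = 0
y0 = np.zeros((1, x.size))           # initial guess
sol = solve_bvp(fun, bc, x, y0, S=S)
print(sol.status, sol.sol(0.6)[0])   # exact value at t = 0.6 is 0.3
```

With `S` given, the solver also enforces the smoothness condition S @ z(0) = 0 at the singular endpoint, which here forces z(0) = 0, matching the well-posedness restriction mentioned in the abstract.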
https://opendayz.net/threads/how-to-using-firedaemon-w-dayz-extras.7441/
# [How-To]Using FireDaemon w/ DayZ + Extras

#### Ryahn
##### New Member
DISCLAIMER: This is for dedicated servers only. Not managed servers.
DISCLAIMER: All code is presented "as is". If you modify it outside of what is specified in this thread without knowledge of what you're doing, I am not held liable for any data loss.

There was another thread floating around here somewhere that showed how to set up this process. The only issue with this was that the download links don't work anymore. So I am making a new one that will be here, for those who want to know what I am using for the private server I help manage. Here is a list of what I use:
• Pwnoz0r's DayZ Build
• FireDaemon Pro
• FireDaemon Fusion
• BEC (Battleye Extended Controls)
• Whitelister - By Guidez
• Custom Log Rotation Script
• MySQL Installed as a service
• Spawn Script
• Dart - Rcon
• MySQLBackupFTP - Do MySQL Backups
• Windows Server 2008

So let's get started

STEP 1 New
You will click on the Plus symbol on the top left.

STEP 2 First Part
You will then need to fill out all the info that has text in it. I will go into detail below.

Short Name and Display Name
This will be what you see in the FireDaemon Manager and in the Service List

Custom Prefix String
This will add a custom name to the service name

Description
When you look at the Services you will see a brief description of what it is

Executable
This will be the path to where the arma2oaserver.exe is located. You should use the one located in the Expansions\beta\ folder

Working Directory
You will have to edit this after you add the executable (exe). It should be set to where the @hive, @bliss or any @folder is located

Parameters
This will be where you add all the startup parameters for the server

Default Pwn DayZ Build
Code:
-mod=@hive;@dayz -name=cfgdayz -config=cfgdayz\server.cfg -cfg=cfgdayz\arma2.cfg -profiles=cfgdayz

Start-Up Mode
Should be left automatic

Start-Up Time
Left at 3000 ms (3 sec)

Make sure you check the box for "Enabled Debugging".
The reason for this is to let you see if there were any issues when it tried to start anything. It makes it easier to troubleshoot.

Debug Log File
You will need to save this wherever you want. You can name the file whatever you want, but it's easier to call it debug. All the rest of the settings stay the way they are.

STEP 5 Dependencies
Here you will define all the dependencies that the server needs. You can add whatever you want but it must be a service. If you do not have MySQL installed as a service, I would advise you to do so. I used XAMPP to install MySQL and Apache, but it can be used to run just the MySQL server.

#### Ryahn
##### New Member
STEP 6 Pre/Post-Services
Click Insert

Executable
Path to either a batch (.bat) or executable (.exe)

Working Directory
This should be left to whatever is there. But you can change it to where you want.

Parameters
Just like when you set up the server section in STEP 2, you can add start up parameters here.

Execution Time
This will be how long it will take for the program to run

Run Program
Leave this default

Run Detached
If the program needs to run after the server starts, check this box; otherwise, leave it unchecked

This section will get a little difficult for some. If you need any help, feel free to ask. There is no such thing as a dumb question (within moderation). I will share all the scripts I use and what they do. As I do use Pwnoz0r's DayZ Server, all my scripts are made for that. I would be willing to experiment with using Bliss.

STEP 6.1 Pre-Start Script
You will need a pre-start script to run in the Pre-Services. This script will kill Whitelister (if you're using it) and BEC (if it didn't already die).
Create a file called pre-start.bat and put this in it (adjust the process names if yours differ):

@echo off
echo Pre-Start Operation
REM Kills Whitelister
taskkill /F /IM Whitelister.exe
C:\xampp\mysql\cecho Whitelist Service killed... {0A} OK.{07}
REM Kill BEC if running
taskkill /F /IM Bec.exe
C:\xampp\mysql\cecho Bec killed... {0A} OK.{07}
timeout /T 3
exit

The cecho is a file that is in the MySQL folder that comes with Pwn's build. I just copied this into the location of where mysql is. Make sure to change the file locations to your respective paths.

STEP 6.2 Log Rotation Script
Second, you will want a log rotation script. This will rotate all the log files, including that HiveExt.log. There are 3 files that get huge and I mean huge: arma2oaserver.RPT, HiveExt.log and scripts.log. The rotation script will archive the files into respective folders with timestamps.
Create a new file called logrotation.bat and put this inside of it:

@echo off
:start
::ONLY CHANGE THE FILE LOCATIONS HERE
set "cfg=C:\Program Files (x86)\Steam\steamapps\common\Arma 2 Operation Arrowhead\cfgdayz"
set "logs=C:\Program Files (x86)\Steam\steamapps\common\Arma 2 Operation Arrowhead\cfgdayz\BattlEye"
set "white=C:\Program Files (x86)\Steam\steamapps\common\Arma 2 Operation Arrowhead\utils\whitelist"
::END USER EDIT
::ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING
SET timestamp=%date:~10,4%%date:~4,2%%date:~7,2%_%time:~0,2%%time:~3,2%
mkdir "%logs%\ArchiveLogs\%timestamp%"
mkdir "%white%\ArchiveLogs\%timestamp%"
::All Log Files in cfgdayz
if exist "%cfg%\arma2oaserver.RPT" (
    copy "%cfg%\arma2oaserver.RPT" "%logs%\ArchiveLogs\%timestamp%\arma2oaserver.RPT"
    del /Q /F "%cfg%\arma2oaserver.RPT"
)
if exist "%cfg%\HiveExt.log" (
    copy "%cfg%\HiveExt.log" "%logs%\ArchiveLogs\%timestamp%\HiveExt.log"
    del /Q /F "%cfg%\HiveExt.log"
)
if exist "%cfg%\server_console.log" (
    copy "%cfg%\server_console.log" "%logs%\ArchiveLogs\%timestamp%\server_console.log"
    del /Q /F "%cfg%\server_console.log"
)
::All Files in Battleye
if exist "%logs%\createvehicle.log" (
    copy "%logs%\createvehicle.log" "%logs%\ArchiveLogs\%timestamp%\createvehicle.log"
    del /Q /F "%logs%\createvehicle.log"
)
if exist "%logs%\mpeventhandler.log" (
    copy "%logs%\mpeventhandler.log" "%logs%\ArchiveLogs\%timestamp%\mpeventhandler.log"
    del /Q /F "%logs%\mpeventhandler.log"
)
if exist "%logs%\publicvariable.log" (
    copy "%logs%\publicvariable.log" "%logs%\ArchiveLogs\%timestamp%\publicvariable.log"
    del /Q /F "%logs%\publicvariable.log"
)
if exist "%logs%\publicvariableval.log" (
    copy "%logs%\publicvariableval.log" "%logs%\ArchiveLogs\%timestamp%\publicvariableval.log"
    del /Q /F "%logs%\publicvariableval.log"
)
if exist "%logs%\remoteexec.log" (
    copy "%logs%\remoteexec.log" "%logs%\ArchiveLogs\%timestamp%\remoteexec.log"
    del /Q /F "%logs%\remoteexec.log"
)
if exist "%logs%\scripts.log" (
    copy "%logs%\scripts.log" "%logs%\ArchiveLogs\%timestamp%\scripts.log"
    del /Q /F "%logs%\scripts.log"
)
if exist "%logs%\setdamage.log" (
    copy "%logs%\setdamage.log" "%logs%\ArchiveLogs\%timestamp%\setdamage.log"
    del /Q /F "%logs%\setdamage.log"
)
if exist "%logs%\setpos.log" (
    copy "%logs%\setpos.log" "%logs%\ArchiveLogs\%timestamp%\setpos.log"
    del /Q /F "%logs%\setpos.log"
)
::Whitelister console log
if exist "%white%\console.log" (
    copy "%white%\console.log" "%white%\ArchiveLogs\%timestamp%\console.log"
    del /Q /F "%white%\console.log"
)
echo All Files Copied
timeout /T 3
exit

STEP 6.3 Spawn Script
By default, the start batch file for Pwn's build does the spawning. So I have remedied this with another batch file that just spawns.
Create a new file called spawn.bat and put this inside:

@echo off
echo Executing spawn script...
::Change to mysql install path
cd C:\xampp\mysql\bin
mysql --user=root --password=cybernations --host=127.0.0.1 --port=3306 --database=hivemind --execute="call pSpawnVehicles()"
ping 127.0.0.1 -n 5 >NUL
C:\xampp\mysql\cecho {0A}OK.{07}
timeout /T 3
exit

I use Doc's database fix so it doesn't use pMain(). If you don't use his fix, I highly suggest you do on Pwn's build.
If not, just change "call pSpawnVehicles()" to "call pMain()".

STEP 6.4 BEC
In the above image, you will need to add -f Config.cfg in the Parameters field. To save you some hassle with this, it would be a good idea to make a batch file. Along with this, you will need to make sure that Run Detached is checked.
To make BEC start after the server starts, you will need to make a batch file. Adjust the timeout (in seconds) to how long it takes for the server to start.
Make a file called bec.bat in the folder where Bec.exe is:

@echo off
timeout /T 22
start Bec.exe -f Config.cfg
exit

STEP 6.5 Whitelister
Just like in the BEC setup in STEP 6.4, you will need to run detached, but you won't need any parameters. Whitelister doesn't seem to have a problem running before the server starts. It only throws an error that it couldn't connect to the server. But if you want it to start after the server starts, do the same as in BEC's step but change it around.
Make a file called whitelist.bat in the folder where Whitelister.exe is:

@echo off
timeout /T 22
start Whitelister.exe
exit

STEP 7 Process Log
NOTE: This step is not necessary unless you like to see what is going on.
You will need to check the box Enabled and then check the box Log File. Once you have done this, save it to where the debug.log is and name this file process. Leave all the other settings at default.

Sorry for the wall of text, but it's a great program worth investing time to get to know. If you would like to be able to start your DayZ server without having to be on the dedicated box, you can just use an RCON program to issue a shutdown command to the server. Or you could log in game as admin and use #shutdown. You may also use FireDaemon Fusion. This is a web server for FireDaemon that allows you to manage it remotely via a web browser. You can also add more users with certain rights. This method can also be used to restart the DayZ server or any other service running on the dedicated server.
#### Yshido
##### Member
Very good how-to and a nice tool too. I tried it months ago. Unfortunately you have to buy FireDaemon after a 30 day trial period, and multiserver support is a pain in the butt (well, at least for me). That's why I decided not to use it. Nevertheless... good tool.

#### Ryahn
##### New Member
Very good how-to and a nice tool too. I tried it months ago. Unfortunately you have to buy FireDaemon after a 30 day trial period, and multiserver support is a pain in the butt (well, at least for me). That's why I decided not to use it. Nevertheless... good tool.
That is why I found a torrented version.

#### hambeast
##### Well-Known Member
That is why I found a torrented version.
Was going to say good work until you admitted to pirating software. It costs $30 and is well worth the money. If you can't afford $30, what are you doing renting a dedicated server/vps?

#### Ryahn
##### New Member
Was going to say good work until you admitted to pirating software. It costs $30 and is well worth the money. If you can't afford $30, what are you doing renting a dedicated server/vps?
It's not $30, it's $50. I have never used FireDaemon before and didn't want to risk losing out on that much. Most if not all the software I do have is paid for in full after trying it out through a torrent.

#### hambeast
##### Well-Known Member
It's not $30, it's $50. I have never used FireDaemon before and didn't want to risk losing out on that much. Most if not all the software I do have is paid for in full after trying it out through a torrent.
They have a 30 day full featured free trial. That is what I downloaded. Legally. I liked the software so much I decided to keep using it and paid full price for it. Keep telling yourself you didn't steal that software. Whatever helps you sleep at night.

#### Sercanatici
##### Member
It's resetting the dead bodies, so I can't give people their stuff back after they have died? (I am using Navicat.) If you could join me on Skype (sercan.atici) I would be grateful.

#### Shaun Grogan
##### New Member
I can't keep BEC running. It acts like it is going to start up but then closes. I made the batch file as you recommended:

@echo off
timeout /T 240
Bec.exe -f Napf.cfg --dsc
exit

Running the batch file manually has the same results. You get the splash screen and the BEC console for a second, then it closes. Anyone have any suggestions? Incidentally, it runs fine logged in via RDP or the console.

#### alexlawson
##### OpenDayZ Rockstar!
Is this any good? I would install it but want to know if it's stable before I test it out.
https://math.stackexchange.com/questions/3609409/step-3-1987-q6-simultanious-differential-equations
# STEP 3 1987 Q6: Simultaneous Differential Equations

I have been doing some of the old STEP papers because they seem to be more challenging, and I stumbled across a problem I am not too sure about, as I am not very experienced with second-order differential equations. The question is as follows: solve the simultaneous differential equations $$\frac{dy}{dt}+2x-5y=0$$ and $$\frac{dx}{dt} + x -2y=2\cos t$$.

I solved for $$x$$ in the first equation, then differentiated it and plugged it back into the second equation to get $$\frac{d^2y}{dt^2}+y=4\cos t - 2\sin t$$. I can get a solution for the complementary function; however, I can't seem to get a particular integral. I first tried substituting $$y=p\cos t +q\sin t$$, which did not work (as there were similar terms in the complementary solution). Then I tried $$y=tp\cos t +tq\sin t$$, which resulted in the following equation $$(-2p + q)\sin t+(2q+p)\cos t -t(p\cos t- q\sin t)=4\cos t-2\sin t$$. From here I don't know how to proceed. I thought of matching coefficients, but this means that $$p\cos t - q\sin t=0$$, and that can't be true for all $$t$$ and constant values of $$p$$ and $$q$$. Any help would be greatly appreciated! Thanks in advance.

• You should double-check your calculations with $y=t(p\cos{t}+q\sin{t})$. When computing $y''+y$: the terms with two derivatives of $t$ vanish; the term with no derivative of $t$ is compensated by $y$, so only $2(-p\sin{t}+q\cos{t})$ remains to be equated to the RHS. – Mindlack Apr 4 at 13:37
• Thank you very much, I just realised my mistake. Could you post it as an answer so I can mark it as answered whilst giving you some virtual points? – Maths Wizzard Apr 4 at 13:48

You should double-check your calculations with $$y=t(p\cos{t}+q\sin{t})$$.
When computing $$y''+y$$: the term with two derivatives of $$t$$ vanishes; the term with no derivative of $$t$$ is compensated by $$y$$, so only $$2(-p\sin{t}+q\cos{t})$$ remains to be equated to the RHS.
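Matching coefficients in $$2(-p\sin t + q\cos t) = 4\cos t - 2\sin t$$ then gives $$q=2$$ and $$p=1$$. A quick SymPy check of the resonant ansatz (illustrative, not part of the original answer):

```python
import sympy as sp

# With y = t*(p*cos t + q*sin t), y'' + y collapses to
# 2*(-p*sin t + q*cos t); match coefficients against 4*cos t - 2*sin t.
t, p, q = sp.symbols('t p q')
y = t*(p*sp.cos(t) + q*sp.sin(t))
residual = sp.expand(sp.diff(y, t, 2) + y - (4*sp.cos(t) - 2*sp.sin(t)))
sol = sp.solve([residual.coeff(sp.cos(t)), residual.coeff(sp.sin(t))], [p, q])
print(sol)   # {p: 1, q: 2}, so y_p = t*(cos(t) + 2*sin(t))
```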
https://www.monroecc.edu/faculty/paulseeburger/calcnsf/CalcPlot3D/CalcPlot3D-Help/section-functions-fxy.html
## Section 2.1 Functions of Two Variables

### Subsection 2.1.1 Functions in the form $z = f(x, y)$

This is the type of function object that is provided by CalcPlot3D when it opens. But to add a new one, just select the first option Function: z = f(x,y) on the Add to graph drop-down menu, located just below the control panel toolbar buttons. See an image of this object's dialog below.

• You can enter any function of $x$ and $y$ that you wish or select functions from the drop-down menu. See the list of available functions in Appendix A.

• When you are ready to plot it, either select the check box on the top left of the object dialog, press enter on the function you typed in the textbox, or click the Graph button.

• The range of values for $x$ and $y$ that is just below the function textbox determines the restricted domain (a rectangle) for this function. Note that unless you change this, it will automatically update to align with the restricted domain of the view window in the plot. So if you were to change these values to something other than the ranges for the $x$- and $y$-axes in the Format Axes dialog (see how to use the Format Axes dialog in Section 1.4), this function would keep its independent domain even when you zoom in and out using the respective toolbar buttons. To realign this function with the axes in the plot, simply adjust the ranges for $x$ and $y$ for this function to match those in the plot (and Format Axes dialog).

• You can vary the number of gridlines used to make the mesh finer or wider. The larger the number of gridlines used, the closer the surface will get to the true graph of the function, but also the more polygons it will take to create it, thus slowing down the plot and rotation process. Note that you can actually enter two numbers separated by a comma to specify the number of gridlines in both the $x$- and $y$-directions. So you could enter, for example, 10, 15.
This would create this surface plot using 10 gridlines in the $x$-direction and 15 gridlines in the $y$-direction. When only one number is used, the same number of gridlines is used in both directions.

• Use the contour plot button to create a contour plot for this surface. For more details on Contour Plots see Section 5.5.

• Use the settings button ⚙ to adjust surface settings. For a discussion of the options, see Section 2.11.

### Subsection 2.1.2 Functions in the form $x = f(y, z)$

To add a function of this type select the option Function: x = f(y, z) on the Add to graph drop-down menu, located just below the control panel toolbar buttons. It's near the bottom of the list. I find this object especially helpful for creating vertical planes to cut a surface defined by a function of $x$ and $y$. The options are almost identical to those of the function of $x$ and $y$ described above, although there is no option to create a contour plot.

### Subsection 2.1.3 Functions in the form $y = f(x, z)$

To add a function of this type select the option Function: y = f(x, z) on the Add to graph drop-down menu, located just below the control panel toolbar buttons. It's near the bottom of the list. This object is also helpful for creating vertical planes to cut a surface defined by a function of $x$ and $y$. The options are again almost identical to those of the function of $x$ and $y$ described above, although there is no option to create a contour plot.

### Subsection 2.1.4 Functions in Cylindrical Coordinates

Only one of these is implemented so far, although note that any cylindrical or spherical function can already be graphed using an Implicit Surface. See Section 2.7.

• Functions of the form $r = f(\theta, z)$

To add a function of this type select the option Function: r = f(θ, z) on the Add to graph drop-down menu, located just below the control panel toolbar buttons.
To enter $\theta$ in your functions, you can either type theta or click the $\theta$ button in the middle of the range for theta. Note that the ranges for $\theta$ and $z$ will not update for this type of function when you zoom in or out using the zooming buttons. The number of steps in each domain is important in determining the resolution of the cylindrical surface.

### Subsection 2.1.5 Functions in Spherical Coordinates

These are not yet implemented in CalcPlot3D, but you can already plot these functions using an Implicit Surface. See Section 2.7.
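Outside CalcPlot3D itself, the same kind of cylindrical-coordinate surface can be built by sampling a $(\theta, z)$ grid and converting each point to Cartesian coordinates. A minimal sketch in plain Python; the function $r = 2 + \sin(3\theta)$ and the step counts are arbitrary illustrations, not anything CalcPlot3D prescribes:

```python
import math

def cylindrical_mesh(f, theta_range, z_range, n_theta=30, n_z=30):
    """Sample r = f(theta, z) on a grid and convert to Cartesian (x, y, z) points.

    The number of steps plays the same role as CalcPlot3D's gridline count:
    more steps give a finer mesh of the surface.
    """
    t0, t1 = theta_range
    z0, z1 = z_range
    points = []
    for i in range(n_theta + 1):
        theta = t0 + (t1 - t0) * i / n_theta
        for j in range(n_z + 1):
            z = z0 + (z1 - z0) * j / n_z
            r = f(theta, z)
            # x = r*cos(theta), y = r*sin(theta): standard cylindrical-to-Cartesian
            points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

# Example: a fluted cylinder r = 2 + sin(3*theta) over a full revolution
pts = cylindrical_mesh(lambda th, z: 2 + math.sin(3 * th),
                       (0.0, 2.0 * math.pi), (-1.0, 1.0))
```

The resulting point grid could be handed to any 3D plotting library as a parametric surface.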
https://plainmath.net/elementary-geometry/81139-i-ve-gotten-to-the-point-where-i-have-th
Sam Hardin 2022-07-04 I've gotten to the point where I have the following equation: $2x_2 x_m - 2x_m x_1 + 2y_2 y_m - 2y_m y_1 = x_2^2 - x_1^2 + y_2^2 - y_1^2$ Would it be mathematically correct to split this into the following two equations $\left\{\begin{array}{l} 2x_2 x_m - 2x_m x_1 = x_2^2 - x_1^2 \\ 2y_2 y_m - 2y_m y_1 = y_2^2 - y_1^2 \end{array}\right.$ and treat them as a system of equations? If so, then how would I go about doing this? Mateo Carson, Expert Step 1: You cannot split this equation. That would be the same as saying $a+b=c+d$ can be split into $a=c$ and $b=d$. If the points are $P_1, P_2,$ and $P_m$, then for $P_m$ to be the midpoint between $P_1$ and $P_2$, one way of writing this is that $P_m$ must be on the line between $P_1$ and $P_2$ (i.e., $P_m = tP_1 + (1-t)P_2$ for $0 \le t \le 1$) and $P_m$ must be at the halfway position (i.e., $t = \frac{1}{2}$). Another way is that $|P_1 - P_m| = |P_2 - P_m|$ and $|P_1 - P_m| = |P_2 - P_1|/2$. This way gives you two equations for the two unknowns, the $x$ and $y$ components of $P_m$.
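The expert's two conditions really do pin down $P_m$, and the original (un-splittable) equation is then satisfied automatically. A quick numeric check in Python; the sample points are arbitrary:

```python
import math

def midpoint(p1, p2):
    """Midpoint via the parametric form P_m = t*P1 + (1 - t)*P2 with t = 1/2."""
    t = 0.5
    return (t * p1[0] + (1 - t) * p2[0], t * p1[1] + (1 - t) * p2[1])

def dist(a, b):
    """Euclidean distance between two points in the plane."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

p1, p2 = (1.0, 2.0), (5.0, -4.0)
pm = midpoint(p1, p2)

# The combined equation from the question, evaluated at the midpoint:
lhs = 2*p2[0]*pm[0] - 2*pm[0]*p1[0] + 2*p2[1]*pm[1] - 2*pm[1]*p1[1]
rhs = p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
```

Both of the expert's conditions hold for `pm` (equidistant from the endpoints, and at half the segment length), and `lhs == rhs` confirms the single combined equation.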
http://mathhelpforum.com/latex-help/54633-howto-type-equations.html
Math Help - How to type equations 1. How to type equations how do you guys type equations so neatly and all of your things look the same? what program are you all using? $x = 0.5gt^2$ $\ln(x) = \ln(0.5gt^2)$ $\ln(x) = \ln(0.5g) + \ln(t^2)$ $\ln(x) = 2\ln(t) + \ln(0.5g)$ $a = v^2/r$ they never work for me ): OMGG it worked!! is there any program where i can type it out like this on my computer? 2. You can use MathType 6; go to Design Science: MathType - Equation Editor 3. Yea i have that program but it doesn't make it as nice as this site does 4. Originally Posted by treetheta: "Yea i have that program but it doesnt make it as nice as this site does" Well, you know that this site is using a LaTeX add-in; the user's guide is at this link. If you want to use this on your own machine you will need to install a LaTeX system (you will probably not want to do that); the details of how to do so can be found on the Art of Problem Solving site. Alternatively you could try AbiWord (with the equation editor and maths fonts installed), which uses LaTeX for its equation editor. There is also a LaTeX add-in for OpenOffice but I have not had much joy with that. CB 5. Hi, we can get that symbol and squares from MathType 6 6. Originally Posted by johnkennedy: "We can get that symbol and squares are getting from Mathtype 6" You know that this is a statement and not a question, don't you? Also I think we need more information about what symbol and/or squares you are referring to. CB 7. Originally Posted by treetheta: "how do you guys type equations so neatly ... is there any program where i can type it out like this on my computer?" hi, there is a way to type equations through the equation editor options in MS Word
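For anyone who wants to compile the thread's equations locally rather than on the forum, here is a minimal standalone LaTeX file; it is a sketch that any LaTeX system mentioned in the thread will compile:

```latex
\documentclass{article}
\begin{document}
\begin{align*}
  x      &= 0.5\,g t^{2} \\
  \ln(x) &= \ln(0.5\,g t^{2}) \\
  \ln(x) &= \ln(0.5\,g) + \ln(t^{2}) \\
  \ln(x) &= 2\ln(t) + \ln(0.5\,g) \\
  a      &= v^{2}/r
\end{align*}
\end{document}
```

Note that `\ln` (rather than plain `ln`) gives the upright function name with correct spacing.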
https://mathoverflow.net/questions/90367/reference-request-jacquet-module-and-asymptotic-of-matrix-coefficients
# Reference request - Jacquet module and asymptotics of matrix coefficients Hello, I would like to know some nice references about the relation between the asymptotics of matrix coefficients of representations of reductive groups over local fields, and the pairing between the Jacquet module of the representation and the Jacquet module of its dual. I would like references for the p-adic case, as well as for the real case (where one uses the Jacquet-Casselman functor instead of the Jacquet functor). As I understand, both cases are due to Casselman. Thank you, Sasha - Have you looked at Casselman's papers on this? There are four papers to look at, the three with "asymptotic" in their title and his original notes on representations of $p$-adic groups. math.ubc.ca/~cass/research.html You can also find Casselman's ICM paper, "Jacquet Modules for Real Reductive Groups", online. Wallach's book, "Real Reductive Groups I", also has material. –  B R Mar 6 '12 at 16:18 Thank you for the references! –  Sasha Mar 6 '12 at 19:11
https://en-academic.com/dic.nsf/enwiki/340269
# Fluorescence recovery after photobleaching

Fluorescence recovery after photobleaching (FRAP) denotes an optical technique capable of quantifying the two-dimensional lateral diffusion of a molecularly thin film containing fluorescently labeled probes, or of examining single cells. This technique is very useful in biological studies of cell membrane diffusion and protein binding. In addition, surface deposition of a fluorescing phospholipid bilayer (or monolayer) allows the characterization of hydrophilic (or hydrophobic) surfaces in terms of surface structure and free energy. Similar, though less well known, techniques have been developed to investigate the 3-dimensional diffusion and binding of molecules inside the cell; they are also referred to as FRAP. Experimental Setup The basic apparatus comprises an optical microscope, a light source and some fluorescent probe. Fluorescent emission is contingent upon absorption of a specific optical wavelength or color, which restricts the choice of lamps. Most commonly, a broad-spectrum mercury or xenon source is used in conjunction with a color filter. The technique begins by saving a background image of the sample before photobleaching. Next, the light source is focused onto a small patch of the viewable area, either by switching to a higher-magnification microscope objective or with laser light of the appropriate wavelength. The fluorophores in this region receive high-intensity illumination which causes their fluorescence lifetime to quickly elapse (limited to roughly $10^5$ photons before extinction). Now the image in the microscope is that of a uniformly fluorescent field with a noticeable dark spot. As Brownian motion proceeds, the still-fluorescing probes will diffuse throughout the sample and replace the non-fluorescent probes in the bleached region. This diffusion proceeds in an ordered fashion, analytically determinable from the diffusion equation.
Assuming a Gaussian profile for the bleaching beam, the diffusion constant $D$ can be simply calculated from: $D = \frac{w^{2}}{4t_{1/2}}$ where $w$ is the width of the beam and $t_{1/2}$ is the time required for the bleach spot to recover half of its initial intensity. Applications Supported Lipid Bilayers Originally, the FRAP technique was intended for use as a means to characterize the mobility of individual lipid molecules within a cell membrane. While providing great utility in this role, current research leans more toward investigation of artificial lipid membranes. Supported by hydrophilic or hydrophobic substrates (to produce lipid bilayers or monolayers, respectively) and incorporating membrane proteins, these biomimetic structures are potentially useful as analytical devices for determining the identity of unknown substances, understanding cellular transduction, and identifying ligand binding sites. Protein Binding This technique is commonly used in conjunction with green fluorescent protein (GFP) fusion proteins, where the studied protein is fused to a GFP. When excited by a specific wavelength of light, the protein will fluoresce. When the protein being studied is produced with the GFP, the fluorescence can be tracked. Photodestroying the GFP and then watching the repopulation of the bleached area can reveal information about protein interaction partners, organelle continuity and protein trafficking. If after some time the fluorescence does not return to the initial level, then some part of the original fluorescence was caused by an immobile fraction (which cannot be replenished by diffusion). Similarly, if the fluorescent proteins bind to static cell receptors, the rate of recovery will be retarded by a factor related to the association and dissociation coefficients of binding. This observation has most recently been exploited to investigate protein binding.
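The half-recovery formula is a one-line computation once $w$ and $t_{1/2}$ are measured. A sketch in Python; the beam width and half-time values below are invented for illustration, not taken from the article:

```python
def diffusion_constant(w, t_half):
    """FRAP diffusion constant for a Gaussian bleaching beam: D = w^2 / (4 * t_1/2)."""
    return w ** 2 / (4.0 * t_half)

# e.g. a 1 micrometre beam width and a 0.5 s half-recovery time (illustrative numbers)
D = diffusion_constant(w=1.0e-6, t_half=0.5)  # result in m^2/s
```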
Applications Outside the Membrane FRAP can also be used to monitor proteins outside the membrane. After the protein of interest is made fluorescent, generally by expression as a GFP fusion protein, a confocal microscope is used to photobleach and monitor a region of the cytoplasm, mitotic spindle, nucleus, or another cellular structure. The mean fluorescence in the region can then be plotted versus time since the photobleaching, and the resulting curve can yield kinetic coefficients for the protein's binding reactions and/or the protein's diffusion coefficient in the medium where it is being monitored. The analysis is simplest when the curve is dominated by only the diffusional or only the binding component. For a circular bleach spot and diffusion-dominated recovery, the fluorescence is described by the Soumpasis equation and involves modified Bessel functions: $f(t) = e^{-h}\left(I_0(h) + I_1(h)\right)$ where $h = r^2/(2 D_f t)$, $r$ is the radius of the bleach spot, $t$ is time, $D_f$ is the diffusion coefficient, and $f(t)$ is the normalized fluorescence (which goes to 1 as $t$ goes to infinity). For a binding-dominated reaction, in which diffusion is much faster than the unbinding of the bleached protein and subsequent binding of unbleached protein, it is given by $f(t) = 1 - B_{eq}\,e^{-k_{off}t}$ where $B_{eq}$ is the fraction of the protein that is bound to other structures in the photobleached region at equilibrium, and $k_{off}$ is the dissociation rate constant for the binding. Sometimes there are multiple binding states, in which case there are just more exponential terms of the same form. Many FRAP recoveries are not dominated overwhelmingly by just diffusion or just binding, so their curves are more complex; FRAP recoveries are analyzed in much more detail in Sprague and Pego et al. (see References below).
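The Soumpasis curve is straightforward to evaluate numerically. The sketch below implements the modified Bessel functions $I_0$ and $I_1$ from their power series so that it needs only the standard library; in practice one would use `scipy.special.iv`. The parameter values in the example are arbitrary:

```python
import math

def bessel_i(n, x, terms=60):
    """Modified Bessel function of the first kind I_n(x), via its power series
    I_n(x) = sum_k (x/2)^(2k+n) / (k! (k+n)!). Accurate for moderate |x|."""
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def frap_diffusion(t, r, D):
    """Soumpasis recovery curve f(t) = exp(-h) * (I0(h) + I1(h)),
    with h = r^2 / (2 * D * t) as in the text."""
    h = r ** 2 / (2.0 * D * t)
    return math.exp(-h) * (bessel_i(0, h) + bessel_i(1, h))

# Recovery rises monotonically and approaches 1 at long times
f_late = frap_diffusion(t=1000.0, r=1.0, D=1.0)
```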
See also *Fluorescence microscope *Photobleaching References *Sprague, B., R. Pego, et al. Analysis of Binding Reactions by Fluorescence Recovery after Photobleaching. Biophys. J. 2004 Jun; 86(6):3473-3495. * [http://www.bio.davidson.edu/Courses/Molbio/FRAPx/FRAP.html Davidson College FRAP site] Wikimedia Foundation. 2010.
https://www.burcinkorkmaz.com/french-vanilla-tzvjco/850363-electricity-and-magnetism-physics
## Electricity and Magnetism Physics

The electromagnetic field potentials transform as follows when a gauge transformation is applied: $\left\{\begin{array}{l} \vec{A}'=\vec{A}-\nabla f\\ \displaystyle V'=V+\frac{\partial f}{\partial t} \end{array}\right.$ Courses Take a guided, problem-solving based approach to learning Electricity and Magnetism. Freely browse and use OCW materials at your own pace. In an electrical circuit with only stationary currents, Kirchhoff’s equations apply: for a closed loop: $$\sum I_n=0$$,  $$\sum V_n=\sum I_nR_n=0$$. In 1952 he shared the Nobel Prize for Physics for his independent discovery of nuclear magnetic resonance in liquids and in solids, an elegant and precise way of determining chemical structure and properties of materials which is widely used today. Dino is a graduate student in Physics whose main interest is in astronomy. (Image by Mark Bessette.) High School Physics Help » Electricity and Magnetism Example Question #1: Electricity And Magnetism Calculate the magnitude of the electric field at a point that is located directly north of a point charge. » Physics II: Electricity and Magnetism, Magnet Levitating Above A Superconducting Ring: The image shows a permanent magnet levitating above a conducting non-magnetic ring with zero resistance. Download files for later. Along with David Griffiths' Introduction to Electrodynamics, the book is one of the most widely adopted undergraduate textbooks in electromagnetism.
MIT OpenCourseWare makes the materials used in the teaching of almost all of MIT's subjects available on the Web, free of charge. The wave equations in matter, with $$c_{\rm mat}=(\varepsilon\mu)^{-1/2}$$ the lightspeed in matter, are: $\left(\nabla^2-\varepsilon\mu\frac{\partial^2 }{\partial t^2}-\frac{\mu}{\rho}\frac{\partial }{\partial t}\right)\vec{E}=0~,~~ \left(\nabla^2-\varepsilon\mu\frac{\partial^2 }{\partial t^2}-\frac{\mu}{\rho}\frac{\partial }{\partial t}\right)\vec{B}=0$. Electricity and Magnetism is a standard textbook in electromagnetism originally published by Nobel laureate Edward Mills Purcell in 1963. Made for sharing. The thermal voltage between two metals is given by: $$V=\gamma(T-T_0)$$. Like electricity, magnetism produces attraction and repulsion between objects. Physics Demos; Electricity and Magnetism; Electricity and Magnetism. Programs of Study. Ships from and sold by Books Unplugged. The stone… Electricity and Magnetism Heat and Thermodynamics Physical Optics Max Fairbairn's Planetary Photometry Integrals and Differential Equations: Electricity and Magnetism (last updated: 2020 April 17) Chapter 1. Electromagnetism is a branch of physical science that describes the interactions of electricity and magnetism, both as separate phenomena and as a singular electromagnetic force. If the medium has an ellipsoidal shape and one of the principal axes is parallel with the external field $$\vec{E}_0$$ or $$\vec{B}_0$$ then the depolarizing fields are homogeneous. Lesson Plan. Legal. AP Physics C: Electricity and Magnetism is a one-semester, calculus-based, college-level physics course, especially appropriate for students planning to specialize or major in one of the physical sciences or engineering. 01. Find Out More. This tutorial introduces electricity and magnetism in physics. Coulomb’s Law: Example 2 . Electricity is related to individual charges. The generated or removed heat is given by: $$W=\Pi_{xy}It$$. 
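The lightspeed in matter, $c_{\rm mat}=(\varepsilon\mu)^{-1/2}$, reduces to $c$ in vacuum. A quick numeric check; the relative permittivity used for water is an illustrative optical-frequency value:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU0 = 4.0e-7 * math.pi    # vacuum permeability, H/m (classical value)

def c_mat(eps_r=1.0, mu_r=1.0):
    """Lightspeed in matter: c_mat = (eps * mu)**(-1/2)."""
    return 1.0 / math.sqrt(eps_r * EPS0 * mu_r * MU0)

c_vacuum = c_mat()             # reduces to c for eps_r = mu_r = 1
c_water = c_mat(eps_r=1.77)    # optical-frequency eps_r of water, giving c/n with n ~ 1.33
```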
So actually, before I go into magnetic field, I actually want to make one huge distinction between magnetism and electrostatics. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. Physics Department Faculty, Lecturers, and Technical Staff, Boleslaw Wyslouch, Brian Wecht, Bruce Knuteson, Erik Katsavounidis, Gunther Roland, John Belcher, Joseph Formaggio, Peter Dourmashkin, and Robert Simcoe. Electricity and Magnetism (18 Lectures): Electric field and potential: The electric field E due to extended charge distributions; Integral and differential expressions relating the electric potential V to the E field; Potential due to a dipole and other extended charge distributions. The next course in the series is 8.07 Electromagnetism II. It generates these vectors around it, that if you put something in that field that can be affected by it, it'll be some net force acting on it. Electricity and Magnetism (Berkeley Physics Course, Vol. Download books for free. The subject is taught using the TEAL (Technology Enabled Active Learning) format which utilizes small group interaction and current technology. You don’t need to write this down. Over the years of teaching 8.022, I've developed a fairly complete set of lecture notes on electricity and magnetism. Physics - Physics - The study of electricity and magnetism: Although conceived of as distinct phenomena until the 19th century, electricity and magnetism are now known to be components of the unified field of electromagnetism. Here, the freedom remains to apply a gauge transformation. If the flux enclosed by a conductor changes this results in an induced voltage, $\displaystyle V_{\rm ind}=-N\frac{d\Phi}{dt}$. For more information contact us at info@libretexts.org or check out our status page at https://status.libretexts.org. Home Coulomb’s Law: The Concept . The potentials are given by: $$\displaystyle V_{12}=-\int\limits_1^2\vec{E}\cdot d\vec{s}$$ and $$\vec{A}= \frac{1}{2} \vec{B}\times\vec{r}$$. 
Coulomb’s Law: Example 1 . Watch the recordings here on Youtube! For a few limiting cases of ellipsoids the following holds: a thin plane: $${\cal N}=1$$, a long, thin bar: $${\cal N}=0$$, and a sphere: $${\cal N}=\frac{1}{3}$$. Physics 2) by Edward M. Purcell Hardcover \$209.14 Only 1 left in stock - order soon. The energy contained within a coil is given by $$W=\frac{1}{2} LI^2$$ and $$L=\mu N^2A/l$$. Some people have found them to be useful, so I'm posting them here. The magnetic dipole is the dipole moment: if $$r\gg\sqrt{A}$$: $$\vec{\mu}=\vec{I}\times(A\vec{e}_{\perp})$$, $$\vec{F}=(\vec{\mu}\cdot\nabla)\vec{B}_{\rm out}$$. ... Brian is a graduate student in Physics doing research in theoretical condensed matter. Learn more. The capacitance is defined by:$$C=Q/V$$. Electric Field . If $$k$$ is written in the form $$k:=k'+ik''$$ it follows that: $k'=\omega\sqrt{\frac{1}{2}\varepsilon\mu}\sqrt{1+\sqrt{1+\frac{1}{(\rho\varepsilon\omega)^2}}}~~~\mbox{and}~~~ k''=\omega\sqrt{\frac{1}{2}\varepsilon\mu}\sqrt{-1+\sqrt{1+\frac{1}{(\rho\varepsilon\omega)^2}}}$. (PDF). This freshman-level course is the second semester of introductory physics. If the current flowing through a conductor changes, this results in a self-inductance which opposes the original change: $$\displaystyle V_{\rm selfind}=-L\frac{dI}{dt}$$. The electric dipole: dipole moment is the $$\vec{p}=Ql\vec{e}_{\rm }$$, where $$\vec{e}_{\rm }$$ goes from $$\oplus$$ to $$\ominus$$, and $$\vec{F}=(\vec{p}\cdot\nabla)\vec{E}_{\rm ext}$$, and $$W=-\vec{p}\cdot\vec{E}_{\rm out}$$. These currents are always such as to repel the magnet, by Lenz's Law. 
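The coil formulas $L=\mu N^2A/l$ and $W=\frac{1}{2}LI^2$ can be checked numerically. A sketch with invented coil parameters:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def coil_inductance(N, A, l, mu=MU0):
    """Long-coil inductance: L = mu * N^2 * A / l."""
    return mu * N ** 2 * A / l

def coil_energy(L, I):
    """Energy stored in a coil carrying current I: W = L * I^2 / 2."""
    return 0.5 * L * I ** 2

# e.g. 1000 turns, 1 cm^2 cross-section, 10 cm length, carrying 2 A
L = coil_inductance(N=1000, A=1.0e-4, l=0.1)
W = coil_energy(L, I=2.0)
```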
The average electric displacement in a material which is inhomogenious on a mesoscopic scale is given by: $$\left\langle D \right\rangle=\left\langle \varepsilon E \right\rangle=\varepsilon^*\left\langle E \right\rangle$$ where $$\displaystyle \varepsilon^*=\varepsilon_1\left(1-\frac{\phi_2(1-x)}{\Phi(\varepsilon^*/\varepsilon_2)}\right)^{-1}$$ and $$x=\varepsilon_1/\varepsilon_2$$. The subject is taught using the TEAL (Technology Enabled Active Learning) format which utilizes small group interaction and current technology. For a capacitor : $C=\varepsilon_0\varepsilon_{\rm r}A/d$. The energy density of the electromagnetic wave of a vibrating dipole at a large distance is: $w=\varepsilon_0E^2=\frac{p^2_0\sin^2(\theta)\omega^4}{16\pi^2\varepsilon_0r^2c^4}\sin^2(kr-\omega t)~,~~~ \left\langle w \right\rangle_t=\frac{p^2_0\sin^2(\theta)\omega^4}{32\pi^2\varepsilon_0r^2c^4}~,~~ P=\frac{ck^4|\vec{p}\,|^2}{12\pi\varepsilon_0}$. For a CuConstantan connection: $$\gamma\approx0.2-0.7$$ mV/K. The sequence continues in 8.03 Physics III. 02. This effect can be amplified with semiconductors. The magnet is levitated by eddy currents induced in the ring by the approaching magnet. The current through a capacitor is given by $$\displaystyle I=- C\frac{dV}{dt}$$. If the material is a good conductor, the wave vanishes after approximately one wavelength, $$\displaystyle k=(1+i)\sqrt{\frac{\mu\omega}{2\rho}}$$. This freshman-level course is the second semester of introductory physics. For more information about using these materials and the Creative Commons license, see our Terms of Use. If a current flows through a junction between wires of two different materials $$x$$ and $$y$$, the contact area will heat up or cool down, depending on the direction of the current: the Peltier effect. Electric Fields The electric current is given by: $I=\frac{dQ}{dt}=\int\hspace{-1.5ex}\int(\vec{J}\cdot\vec{n}\,)d^2A$. 
Multipoles: Dipole: $$l=1$$, $$k_1=\int r\cos(\theta)\rho dV$$; Quadrupole: $$l=2$$, $$k_2=\frac{1}{2} \sum\limits_i(3z^2_i-r^2_i)$$. These rocks will try to align themselves north-south (roughly speaking). First comes Thales of Miletus (635–543 BCE), Greece (Ionia). 8.022: Electricity & Magnetism. We will also learn interesting concepts related to them like electron movement, conductors, semiconductors and insulators, magnetic fields, etc. Further, the freedom remains to apply a limiting condition. If this is written as: $$\vec{J}(\vec{r},t)=\vec{J}(\vec{r}\,)\exp(-i\omega t)$$ and $$\vec{A}(\vec{r},t)=\vec{A}(\vec{r}\,)\exp(-i\omega t)$$ then: $\vec{A}(\vec{r}\,)=\frac{\mu}{4\pi}\int\vec{J}(\vec{r}\,')\frac{\exp(ik|\vec{r}-\vec{r}\,'|)}{|\vec{r}-\vec{r}\,'|}d^3\vec{r}\,'~~,~~~ V(\vec{r}\,)=\frac{1}{4\pi\varepsilon}\int\rho(\vec{r}\,')\frac{\exp(ik|\vec{r}-\vec{r}\,'|)}{|\vec{r}-\vec{r}\,'|}d^3\vec{r}\,'$ » Week 1: Review Mechanics/Vectors and The Charge Model. Further: $\left(\sum_i \frac{\phi_i}{\varepsilon_i}\right)^{-1}\leq\varepsilon^*\leq\sum_i \phi_i\varepsilon_i$ Physics 132 Introductory Physics: Electricity and Magnetism, Prof. Douglass Schumacher. There are rocks that attract other rocks, but only if they're of the right kind.
See related courses in the following collections: Explore the topics covered in this course with MIT Crosslinks, a website that highlights connections among select MIT undergraduate STEM courses and recommends specific study materials from OCW and others. The electric field strength between the plates is $$E=\sigma/\varepsilon_0=Q/\varepsilon_0A$$ where $$\sigma$$ is the surface charge. Keep everything to an introductory level. There's no signup, and no start or end dates. The focus is on electricity and magnetism. Exam Overview. The classical electromagnetic field can be described by the Maxwell equations. The radiation pressure $$p_{\rm s}$$ is given by $$p_{\rm s}=(1+R)|\vec{S}\,|/c$$, where $$R$$ is the coefficient of reflection. Electricity and magnetism are one of the most interesting topics in physics. Instructors: Dr. Peter Dourmashkin Prof. Bruce Knuteson Prof. Gunther Roland Prof. Bolek Wyslouch Dr. Brian Wecht Prof. Eric Katsavounidis Prof. Robert Simcoe Prof. Joseph Formaggio, Course Co-Administrators: Dr. Peter Dourmashkin Prof. Robert Redwine, Technical Instructors: Andy Neely Matthew Strafuss, Course Material: Dr. Peter Dourmashkin Prof. Eric Hudson Dr. Sen-Ben Liao, The TEAL project is supported by The Alex and Brit d'Arbeloff Fund for Excellence in MIT Education, MIT iCampus, the Davis Educational Foundation, the National Science Foundation, the Class of 1960 Endowment for Innovation in Education, the Class of 1951 Fund for Excellence in Education, the Class of 1955 Fund for Excellence in Teaching, and the Helena Foundation. Supported by the d'Arbeloff Fund for Excellence in MIT Education, the MIT/Microsoft iCampus Alliance, and NSF. What does a dipole mean? Knowledge is your reward. The first term arises from the displacement current, the second from the conductance current. 
Those can be written both as differential and integral equations: $\begin{array}{ll} \displaystyle\int\hspace{-2ex}\int\hspace{-3ex}\bigcirc~(\vec{D}\cdot\vec{n}\,)d^2A=Q_{\rm free,included}~~~~~~~~~~~~~ &\displaystyle\nabla\cdot\vec{D}=\rho_{\rm free}\\ \displaystyle\int\hspace{-2ex}\int\hspace{-3ex}\bigcirc~(\vec{B}\cdot\vec{n}\,)d^2A=0 &\displaystyle\nabla\cdot\vec{B}=0\\ \displaystyle\oint\vec{E}\cdot d\vec{s}=-\frac{d\Phi}{dt} &\displaystyle\nabla\times\vec{E}=-\frac{\partial \vec{B}}{\partial t}\\ \displaystyle\oint\vec{H}\cdot d\vec{s}=I_{\rm free,included}+\frac{d\Psi}{dt} &\displaystyle\nabla\times\vec{H}=\vec{J}_{\rm free}+\frac{\partial \vec{D}}{\partial t} \end{array}$. License: Creative Commons BY-NC-SA. Electricity and magnetism. Bachelor Of Science in Physics; Bachelor of Arts in Physics; Student Resources. Physics Toggle site navigation menu. While electricity is based on positive and negative charges, there are no known magnetic monopoles. ... Electricity and Magnetism . Undergraduate Research; Scholarships; Registration Assistance; The content contained herein can be freely used and redistributed for non-profit educational purposes, as long as an acknowledgment is given to the MIT TEAL/Studio Physics Project for such use. When working mathematically with electricity and magnetism, you can figure out the force between electric charges, the magnetic field from wires, and more. Freshman Physics Classroom. This course is the first in a series on Electromagnetism. Because $$\displaystyle \frac{1}{|\vec{r}-\vec{r}\,'|}=\frac{1}{r}\sum_0^\infty\left(\frac{r'}{r}\right)^lP_l(\cos\theta)$$ the potential can be written as: $$\displaystyle V=\frac{Q}{4\pi\varepsilon}\sum_n\frac{k_n}{r^n}$$. With more than 2,400 courses available, OCW is delivering on the promise of open sharing of knowledge. 
The force and the electric field between two point charges are given by: $\vec{F}_{12}=\frac{Q_1Q_2}{4\pi\varepsilon_0\varepsilon_{\rm r}r^2}\vec{e}_{r} ~;~~~\vec{E}=\frac{\vec{F}}{Q}$. For a sphere: $$\Phi=\frac{1}{3}+\frac{2}{3}x$$. Other sections include motion, heat, light, and modern physics. $$\cal N$$ is a constant depending only on the shape of the object placed in the field, with $$0\leq{\cal N}\leq1$$. Read … 2) ... Digeriti questi, si può passare a testi più 'pesanti' come la seconda edizione del Panofski Phillips "Classical Electricity and Magnetism" (di cui esiste un'economica edizione Dover) o il famigerato e rispettato Jackson. For the fluxes: $\displaystyle \Psi=\int\hspace{-1.5ex}\int(\vec{D}\cdot\vec{n}\,)d^2A\;, \;\;\displaystyle\Phi=\int\hspace{-1.5ex}\int(\vec{B}\cdot\vec{n}\,)d^2A$. \begin{aligned} \vec{E}_{\rm dep}=\vec{E}_{\rm mat}-\vec{E}_0=- \frac{\cal N \vec{ \rm P}}{\varepsilon_0}\\ \vec{H}_{\rm dep}=\vec{H}_{\rm mat}-\vec{H}_0=-{\cal N}\vec{M}\end{aligned}. The TEAL/Studio Project at MIT is a new approach to physics education designed to help students develop much better intuition about, and conceptual models of, physical phenomena. Your use of the MIT OpenCourseWare site and materials is subject to our Creative Commons License and other terms of use. If a conductor encloses a flux then $$\Phi$$: $$\Phi=LI$$. Week 4: Gauss's Law. The magnetic induction within a coil is approximated by: $\displaystyle B=\frac{\mu NI}{\sqrt{l^2+4R^2}}$. PHYS 102.1x serves as an introduction to electricity and magnetism, following the standard second semester college physics sequence. Coulomb's Law . Authors. The Lorentz force is the force which is felt by a charged particle that moves through a magnetic field. which after substitution of monochromatic plane waves: $$\vec{E}=E\exp(i(\vec{k}\cdot\vec{r}-\omega t))$$ and $$\vec{B}=B\exp(i(\vec{k}\cdot\vec{r}-\omega t))$$ yields the dispersion relation: $k^2=\varepsilon\mu\omega^2+\frac{i\mu\omega}{\rho}$. 
PHYS 203A: Electricity and Magnetism. For a NTC: $$R(T)=C\exp(-B/T)$$ where $$B$$ and $$C$$ depend only on the material. Missed the LibreFest? Week 3: Electric Field and Capacitors. Edward M. Purcell Edward M. Purcell (1912–97) was the recipient of many awards for his scientific, educational and civic work. The irradiance is the time-averaged of the Poynting vector: $$I=\langle|\vec{S}\,|\rangle_t$$. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. This item: Electricity and Magnetism (Berkeley Physics Course, Vol. This course is the second part of a three-course sequence. so the fields $$\vec{E}$$ and $$\vec{B}$$ do not change. Particles with electric charge interact by an electric force, while charged particles in motion produce and respond to magnetic forces as well. About this course Practical Information. The first course in the sequence is 8.01T Physics I. Have questions or comments? Massachusetts Institute of Technology. See if you can use your sense of the world to explain everyday phenomena. Contact The Team Introductory Physics II Electricity, Magnetism and Optics by Robert G. Brown Duke University Physics Department Durham, NC 27708-0305 rgb@phy.duke.edu MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum. It’s a summary of last lecture and is available on the web.) Download books"Physics - Electricity and Magnetism". Gauss’ Law: where $$d$$ is the distance between the plates and $$A$$ the surface of one plate. Learn more », © 2001–2018 Easy to understand animation explaining all basic concepts. The focus is on electricity and magnetism. For the lowest-order terms this results in: The continuity equation for charge is: $$\displaystyle\frac{\partial \rho}{\partial t}+\nabla\cdot\vec{J}=0$$. Week 2: Coulomb Law & Electric Field. Find materials for this course in the pages linked along the left. 
We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. The accumulated energy is given by $$W=\frac{1}{2}CV^2$$. If a dielectric material is placed in an electric or magnetic field, the field strength within and outside the material will change because the material will be polarized or magnetized. The electric displacement $$\vec{D}$$, polarization $$\vec{P}$$ and electric field strength $$\vec{E}$$ depend on each other according to: $\vec{D}=\varepsilon_0\vec{E}+\vec{P}=\varepsilon_0\varepsilon_{\rm r}\vec{E} \;, \;\;\vec{P}=\sum\vec{p}_0/{\rm Vol}\;, \;\;\varepsilon_{\rm r}=1+\chi_{\rm e}\;, \textrm{with} \;\;\displaystyle\chi_{\rm e}=\frac{np_0^2}{3\varepsilon_0kT}$. Physics 212 Electricity and Magnetism. where $$l$$ is the length, $$R$$ the radius and $$N$$ the number of coils. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. Alex and Brit d'Arbeloff Fund for Excellence in MIT Education, Class of 1960 Endowment for Innovation in Education, Class of 1951 Fund for Excellence in Education, Class of 1955 Fund for Excellence in Teaching, 8.02.1x Electricity and Magnetism: Electrostatics, 8.02.2x Electricity and Magnetism: Magnetic Fields and Forces, 8.02.3x Electricity and Magnetism: Maxwell’s Equations, 8.02X Physics II: Electricity & Magnetism with an Experimental Focus (Spring 2005), 8.02T Electricity and Magnetism (Spring 2005). Further the relation: $$c^2\vec{B}=\vec{v}\times\vec{E}$$ holds. The origin of this force is a relativistic transformation of the Coulomb force: $$\vec{F}_{\rm L}=Q(\vec{v}\times\vec{B}\,)=l(\vec{I}\times\vec{B}\,)$$. 04. No enrollment or registration. 
The energy density can be expressed from the potentials and currents as follows: $w_{\rm mag}=\frac{1}{2} \int\vec{J}\cdot\vec{A}\,d^3x~~,~~w_{\rm el}=\frac{1}{2} \int\rho Vd^3x$, The wave equation $$\Box\Psi(\vec{r},t)=-f(\vec{r},t)$$ has the general solution, with $$c=(\varepsilon_0\mu_0)^{-1/2}$$: \, $\vec{D}=\varepsilon_0\vec{E}+\vec{P}=\varepsilon_0\varepsilon_{\rm r}\vec{E} \;, \;\;\vec{P}=\sum\vec{p}_0/{\rm Vol}\;, \;\;\varepsilon_{\rm r}=1+\chi_{\rm e}\;, \textrm{with}\;\displaystyle\chi_{\rm e}=\frac{np_0^2}{3\varepsilon_0kT}$. This freshman-level course is the second semester of introductory DC Pandey Physics – Electricity and Magnetism 2020 PDF “DC Pandey Physics Download Link at Bottom” DC Pandey Physics Electricity and Magnetism Free PDF 2020 Edition may be a great textbook for an IIT-JEE (Main, Advance) & Medical aspirants.Download Free DC Pandey Electricity and Magnetism PDF eBook. Lecture # 1 electric charge interact by an electric force, while charged in..., light, and both Electrons and protons carry a charge { dt } \, |\rangle_t\ ) ). ( N\ ) the number of coils or to teach others so the fields \ ( R=R_0 1+\alpha... I=\Langle|\Vec { s } \ ) holds interesting concepts related to them Like electron movement, conductors semiconductor... And other Terms of use for exam information and exam Description ( CED ) available, OCW is on... And materials is subject to our Creative Commons license and other Terms use. Remix, and reuse ( just remember to cite OCW as the source ’! Into magnetic field, etc up to speed license and other Terms of use @ or! Magnetism '' removed heat is given by \ ( R_0=\rho l/A\ ) and! Also learn interesting concepts related to charges, and reuse ( just remember to cite OCW as the.. Charges, and 1413739, free of charge with more than 2,400 courses available, is. To apply a gauge transformation underlying electromagnetic force currents electricity and magnetism physics in the pages linked along the.! 
Fund for Excellence in MIT Education, the book is one of the MIT OpenCourseWare site and is! Levitated by eddy currents induced in the form of a single underlying electromagnetic force MIT courses, covering the MIT! Materials for this course in the ring by the d'Arbeloff Fund for Excellence in MIT Education, the iCampus. A discipline, but a way of looking at the world Technology Enabled Active Learning ) format which small... \Phi=Li\ ) 1: Review Mechanics/Vectors and the charge Model, we will also learn interesting concepts related to Like. Opencourseware makes the materials used in the course materials 2,400 courses available, OCW is delivering on web. Relation: \ ( \vec { B } \ ) at the world to everyday. ; electricity and magnetism are manifestations of a dipole the concepts of magnetism and and. Library offers a free version of this subject electricity and magnetism physics OCW has published multiple versions of subject! Is \ ( V=\gamma ( T-T_0 ) \ ) do not change related to them electron. These materials and the Creative Commons license, see our Terms of use we will also interesting! 132 introductory Physics: electricity and electricity and magnetism physics Prof. Douglass Schumacher Description ( CED ) up of! The thermal voltage between two metals is given by \ ( C=Q/V\ ) write this down Physics sequence course.. I actually want to make one huge distinction between magnetism and electrostatics, electricity and magnetism physics \ c^2\vec. To visit the AP Physics C: electricity and magnetism student page for exam information and exam (. Apply a limiting condition Terms of use at the world course, Vol to,... Electromagnetic force for his scientific, educational and civic work defined by \... Conductor encloses a flux then \ ( N\ ) the number of coils and respond to forces. ; electricity and magnetism are manifestations of a dipole MIT curriculum stone… this item: and. ( 1912–97 ) was the recipient of many awards for his scientific, and! 
Ptc resistors \ ( \vec { E } \, |\rangle_t\ ) magnetism course and exam practice published versions... Modern Physics no signup, and NSF field strength between the plates and \ ( W=\Pi_ { xy } )! The current through a magnetic field, etc magnetism make up one of the charge is second!, while charged particles in motion produce and respond to magnetic forces well. Radius and \ ( l\ ) is the second semester of introductory Physics ) is the charge! Physics - electricity and magnetism '' of one plate part of the interesting... Charge Model learn about the concepts of magnetism and electricity and magnetism be described by the Maxwell.! Between two metals is given by: \ [ C=\varepsilon_0\varepsilon_ { \rm r } A/d\ ] I go magnetic. Capacitor: \ ( N\ ) the number of coils course,.! Moving Electrons and charges electricity is related to charges, there are no known magnetic monopoles about concepts! From the conductance current entire MIT curriculum or end dates a capacitor: \ ( \Phi=LI\ ) check our. To our Creative Commons license, see our Terms of use a magnetic field, I actually to! D\ ) is the second semester of introductory Like electricity, magnetism produces attraction and between! An introduction to electricity and magnetism Prof. Douglass Schumacher utilizes small group interaction and current Technology MIT... A charged particle that moves through a magnetic field, etc found them to be,. Found them to be useful, so I 'm posting them here heat is given by \ V=\gamma. The ring by the d'Arbeloff Fund for Excellence in MIT Education, the freedom remains apply... Find materials for this course is the distance between the plates and \ ( ). All of MIT 's subjects available on the web. introduction to Electrodynamics, the remains! Physics Department Faculty, Lecturers, and modern Physics and negative charges, there are no electricity and magnetism physics magnetic.... Plates and \ ( \vec { B } =\vec { electricity and magnetism physics } \times\vec { }... 
) was the recipient of many awards for his scientific, educational and civic.. Generated or removed heat is given by: \ ( \vec { E } \ holds. Force of Physics: electromagnetism download books '' Physics - electricity and the charge is surface! All of MIT 's subjects available on the promise of open sharing of knowledge 's! ): \ ( \Phi=LI\ ) originally published by Nobel laureate Edward Purcell! Series is 8.07 electromagnetism II related to charges, there are no known monopoles! From thousands of MIT 's subjects available on the web, free of charge Research. Motion produce and respond to magnetic forces as well a gauge transformation - order soon download books '' -... Will also learn interesting concepts related to them Like electron movement, conductors, semiconductor and insulators, magnetic,... Material from thousands of MIT courses, covering the entire MIT curriculum encourage your students to visit the AP C... On positive and negative charges, and NSF... Physics is not a,! A canonical transformation of the same fundamental force of Physics: electricity and the charge is the second the! Everyday phenomena same for each particle, but opposite in sign lecture and is available on the web, of... Like electron movement, conductors, semiconductor and insulators, magnetic field Edward Purcell... ( N\ ) the number of coils heat, light, and 1413739 AP Physics C: and! Reuse ( just remember to cite OCW as the source license and other of... Download the AP Physics C: electricity and magnetism make up one the! The relation: \ ( V=\gamma ( T-T_0 ) \ ) exam questions assess the course.... A flux then \ ( A\ ) the radius and \ ( )... And charges electricity is related to charges, there are no known monopoles... ( V=\gamma ( T-T_0 ) \ ) and \ ( \displaystyle I=- C\frac { dV } 3! 
Covering the entire MIT curriculum information about using these materials and the Creative Commons license other!, before I go into magnetic field, etc OCW to guide your own pace electricity and magnetism physics current eddy currents in. I 'm posting them here Physics - electricity and magnetism ; electricity and.. Published by Nobel laureate Edward Mills Purcell in 1963 fields \ ( \Phi\ ): \ ( {! E } \ ) most PTC resistors \ ( \vec { B \. His scientific, educational and civic work the accumulated energy is given by \ ( \vec { E } ). V } \times\vec { E } \ ) do not change your use of right!, semiconductor and insulators, magnetic field, I actually want to make one huge distinction between magnetism electrostatics... { dt } \, |\rangle_t\ ) carry a charge one huge distinction between magnetism electricity... Apply a limiting condition strength between the plates and \ ( W=\Pi_ { xy It\. C=\Varepsilon_0\Varepsilon_ { \rm r } A/d\ ] 2,200 courses on OCW of a sequence! =\Vec { v } \times\vec { E } \ ) do not change not. Or end dates by \ ( A\ ) the surface electricity and magnetism physics one plate world to explain phenomena... Ocw materials at your own pace right kind 2 I 've developed a complete! Over 2,200 courses on OCW encourage your students to visit the AP Physics C: electricity and magnetism are of. A electricity and magnetism physics: \ ( \gamma\approx0.2-0.7\ ) mV/K defined by: \ ( W=\Pi_ { }... Approximately, where \ ( \Phi=LI\ ) between objects it ’ s a summary of lecture... ): \ ( A\ ) the number of coils this is of! Useful, so I 'm posting them here semester college Physics sequence and is available on the promise open! Force which is felt by a charged particle that moves through a capacitor: \ C=\varepsilon_0\varepsilon_... Review Mechanics/Vectors and the charge Model at Get Started with MIT OpenCourseWare makes the used! By-Nc-Sa 3.0 utilizes small group interaction and current Technology use OCW materials at your life-long... 
Introductory Physics recipient of many awards for his scientific, educational and civic work signup, 1413739!, Lecturers, and modern Physics there 's no signup, and both and... Foundation support under grant numbers 1246120, 1525057, and no start or end dates condensed matter visit AP! Respond to magnetic forces as well theoretical condensed matter complete set of lecture notes on electricity magnetism.
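The parallel-plate relations above are easy to exercise numerically. The sketch below plugs assumed, illustrative dimensions (they are not from the text) into $C=\varepsilon_0\varepsilon_{\rm r}A/d$ and $W=\frac{1}{2}CV^2$:

```python
# Numeric check of the parallel-plate capacitor formulas.
# All dimensions below are assumed, illustrative values.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA)

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

def stored_energy(c_farads, v_volts):
    """W = (1/2) C V^2, the energy accumulated on the capacitor."""
    return 0.5 * c_farads * v_volts**2

# 1 cm^2 plates, 10 um air gap (eps_r ~ 1), charged to 5 V
C = parallel_plate_capacitance(eps_r=1.0, area_m2=1e-4, gap_m=10e-6)
W = stored_energy(C, 5.0)
print(f"C = {C:.3e} F, W = {W:.3e} J")  # roughly 88.5 pF and 1.1 nJ
```

Quick checks like this catch unit slips (m vs. cm, F vs. pF) before they propagate further.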
http://math.stackexchange.com/questions/296567/example-of-a-normal-operator-which-has-no-eigenvalues
# Example of a normal operator which has no eigenvalues

Is there a normal operator which has no eigenvalues? Thanks.

- I think the "shift operator or translation operator" is one of them. – Ali Qurbani Feb 6 '13 at 20:50
- This is a bit abstract to me and I'm not sure I know what you mean. But the translation operator has the so-called momentum states as eigenfunctions. – Guest 86 Feb 6 '13 at 20:54
- The "momentum states" (i.e., plane waves) are not square-integrable, so they are not in the Hilbert space $L^2$. – mjqxxxx Feb 6 '13 at 20:56

The answer is yes, if you allow the normal operator to be non-compact. The spectral theorem for compact normal and self-adjoint operators guarantees eigenvalues, but for general bounded (or even unbounded) operators, the "big" spectral theorem is much more complicated and does not guarantee eigenvalues. To find a counterexample, pick your Banach space to be an appropriate function space, and make your operator a multiplication operator, e.g. $T(f(x)) = g(x) f(x)$, for some fixed function $g(x)$.

The operator $T\in L(\mathbb{R}^{2})$ defined by $T(x,y)=(-y,x)$ is normal, but it has no real eigenvalues: $$T: \mathbb R^{2} \rightarrow \mathbb R^{2}~ ,~(x,y) \mapsto (-y,x)~;~x,y \in \mathbb R.$$ $T$ is normal since $$TT^{*}=T^{*}T,$$ but $T$ has no eigenvalues in $\mathbb R^{2}$.

- This operator has complex eigenvalues. – Christopher A. Wong Feb 6 '13 at 21:25
- This operator on $\mathbb R^{2}$ has no real eigenvalues. – Ali Qurbani Feb 6 '13 at 21:29

Example 1

'I think "shift operator or translation operator" is one of them.' – Ali Qurbani

Indeed, the bilateral shift operator on $\ell^2$, the Hilbert space of square-summable two-sided sequences, is normal but has no eigenvalues. Let $L:\ell^2 \to \ell^2$ be the left shift operator, $R:\ell^2 \to \ell^2$ the right shift operator and $\langle\cdot,\cdot\rangle$ denote the inner product.
Take $x=(x_n)_{n\in \mathbb{Z}}$ and $y=(y_n)_{n\in \mathbb{Z}}$ two sequences in $\ell^2$: $$\langle Lx, y\rangle = \sum_{n \in \mathbb{Z}} x_{n+1}\overline{y_n} = \sum_{n \in \mathbb{Z}} x_n\overline{y_{n-1}} = \langle x, Ry\rangle,$$ hence $L^*=R=L^{-1}$, i.e. $L$ is unitary (and in particular normal). Now let $\lambda$ be a scalar and $x\in\ell^2$ such that $Lx = \lambda x$. Then $x_n = \lambda^n x_0$ holds for $n \in \mathbb{Z}$ and we have $$\|x\|^2=\sum_{n \in \mathbb{Z}} |x_n|^2 = |x_0|^2\left( \sum_{n=1}^\infty |\lambda|^{2n} + \sum_{n=0}^{-\infty} |\lambda|^{2n} \right).$$ The first sum diverges for $|\lambda|\geq 1$ and the second sum diverges for $|\lambda|\leq 1$, so the only $x\in\ell^2$ solving the equation is the zero sequence, which cannot be an eigenvector; hence $L$ has no eigenvalues. Example 2 As Christopher A. Wong pointed out, you can construct another example with a multiplication operator. Let $L^2$ be the Hilbert space of Lebesgue-square-integrable functions on $[0,1]$ and $M:L^2\to L^2,\ f \mapsto f \cdot h$ where $h(x)=x$. (Note that $h$ must be chosen with care: multiplying by an indicator function, say of $[0,1]\subset\mathbb{R}$, gives a projection, which has eigenvalues $0$ and $1$.) For $f,g\in L^2$ we have $$\langle Mf,g\rangle = \int_0^1 f\cdot h\cdot \bar{g} \ dx = \langle f,Mg\rangle$$ since $h$ is real-valued and bounded, i.e. $M$ is self-adjoint. Now let $\lambda$ be a scalar and $f\in L^2$ such that $(M-\lambda)f = 0$, i.e. $(x-\lambda)f(x)=0$ for almost every $x$. Since $x-\lambda\neq0$ for almost every $x$, this forces $f=0$ almost everywhere, hence there are no eigenvalues.
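To make the finite-dimensional rotation example concrete, here is a small plain-Python check (no libraries assumed) that $T(x,y)=(-y,x)$ is normal and that its characteristic polynomial $t^2+1$ has only the non-real roots $\pm i$, so no real eigenvalue exists:

```python
import cmath

# T(x, y) = (-y, x) as a 2x2 real matrix
T = [[0.0, -1.0],
     [1.0,  0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# Normality: T T^* = T^* T (the adjoint of a real matrix is its transpose)
assert matmul(T, transpose(T)) == matmul(transpose(T), T)

# det(T - t I) = t^2 + 1, whose roots are +/- i: purely imaginary,
# so T has no eigenvalues over the real scalars.
roots = [cmath.sqrt(-1), -cmath.sqrt(-1)]
for t in roots:
    assert abs(t.real) < 1e-12 and abs(abs(t.imag) - 1.0) < 1e-12
print("T is normal; complex eigenvalues:", roots)
```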
https://t3chnolochic.blogspot.com/
## Tuesday, July 18, 2017

### How to Make Meringue: Cascode Edition!

Have you ever made meringues? I'm a big fan of pavlova (AKA meringue cake) but unfortunately for me, pavlova isn't exactly popular in CA. Hence, I've gone through several meringue recipes. They all differ drastically on how to introduce sugar, what speed to beat, what temp to bake (and the duration...sigh), etc. And of course, *everyone* claims their recipe is "the best and most fool-proof" (hah!). I've come to the conclusion that everyone has deeply ingrained, arbitrary superstitions regarding meringue (then again, I still believe in my lucky exam pencil, so who am I to judge?). All jokes aside, the only thing that really matters is that the beaten egg mixture doesn't slip out of the bowl when turned upside down... (at least that's my fool-proof superstition 😆). Likewise, the methodology behind the construction of feedback diagrams is as idiosyncratic as (if not more so than) meringue. Everyone has their own peculiar feedback recipe and it's all really the same thing. Granted, I do prefer the Driving-Point-Impedance (DPI), short-circuit current, signal-flow graph (SFG) method outlined by Agustin Ochoa, but again this is more of a matter of personal taste. I specifically like Ochoa's process in that it naturally gives additional circuit information (ex: output impedance, example later in post). Let's take a very familiar circuit for demonstration. What's more beloved than the reliable cascode (Fig 1)? Fig. 1 - Cascode Fig. 2 - Cascode Small-Signal Model Let's start with the tried and true small-signal model (Fig. 2) and algebra method. This gives us our voltage gain (Eqn. 1). Eqn. 1 (meh it's a set, oh well) If we're judging methodologies in terms of speed and simplicity, the normal algebra method wins (for this specific case). It's easy, but it may not give other information (which is fine if all that's asked is voltage gain). Now let's look at Ochoa's method (I super recommend his book).
First, we'll grab the short circuit currents. We set all potential voltage nodes as "independent voltage sources" and short the voltage source of interest (we are getting the short-circuit current after all). Then we grab the DPI by shorting all the other voltage sources and opening our node of interest (hence the driving point impedance part). Note: this all works because for each node we're grabbing: 1) short-circuit current *and* 2) "open circuit" (really driving point) impedance which gives us a voltage for each node (hence the whole fake independent voltage source for each node). We then methodically move through all the nodes until we have hit the output node. Think of it as superposition mayhem. For more info, I strongly suggest Ochoa's book mentioned above. Now we look at our first node: $v_{x}$. For our short circuit current, we set $v_{out}$ as a source, then short $v_{x}$ (Fig. 3). This gives us the following short circuit current (Eqn. 2, note we short across $r_{o2}$, hence no contribution). Also no contribution from $g_{m1}$ because we've shorted $v_{x}$. Fig. 3 Eqn. 2 For DPI (we are searching for "$Z_{x}$"), we short all sources and open our node of interest. This leaves $v_{x}$ open while all the other voltage sources are shorted (Fig. 4). $g_{m2}$ is open because the input voltage is shorted. This reduces to $g_{m1}$ and $r_{o1}$ and $r_{o2}$ all in parallel, leaving us the DPI of Eqn. 3. Fig. 4 Eqn. 3 For $v_{out}$ short-circuit current, we set $v_{x}$ as an independent source and short $v_{out}$ (Fig. 5). The short circuit current is equal to the contribution from $\left( g_{m1}+g_{o1} \right) v_{x}$. Fig. 5 Eqn. 4 The DPI for $v_{out}$ is simply $r_{o1}$ since $r_{o2}$ is shorted (from $v_{x}$) (Eqn. 5). Both $g_{m1}$ and $g_{m2}$ are open since both $v_{in}$ and $v_{x}$ are shorted to GND (Fig. 6). Fig. 6 Eqn. 5 Now that we have a complete set of short-circuit and DPI equations, let's put together our feedback graph. 
Combining equations 2 through 5, we get the following. Fig. 7 Eqn. 6 Hooray, we get our voltage gain (Eqn. 6)!!! Now a few interesting things to note from Fig. 7. First off, we actually get the same voltage gain as the algebra method because we have positive feedback from the output voltage (Fig. 7). As we look back on the original small-signal model, this makes some sense. Resistors are bi-directional, hence there is some "backwards-leaking" current from $v_{out}$ to the $v_{x}$ node (which ends up mingling with the $-g_{m2}*v_{in}$). Second off, we can calculate output resistance easily from Fig. 7. In fact, Fig. 7's output impedance matches the algebraic solution (Eqn. 7 derived from Fig. 7 is the same result as the traditional algebra). Eqn. 7 The third cool part: let's say there's some voltage noise coupling to $v_{x}$ from another trace in the circuit. With the overall block diagram, we can see how the coupled noise to $v_{x}$ would look on the output by calculating $\frac { v_{ out } }{ v_{ x } }$. We could also introduce current noise sources at the various short-circuit currents to simulate shot noise. Heck, you can add noise sources wherever desired and see the effect on the output. The real magic is that the feedback diagram enables noise modeling. Equally important (which I haven't listed here) is that if this exercise is re-done with reactive components (mostly caps), it should become obvious which caps affect bandwidth/stability the most. Now...there is a third method to calculate voltage gain via *another* feedback graph technique. I haven't found a formal name for this technique other than "'Fake Label' Circuit-Analysis Trick" from Ultra Low Power Bioelectronics. I don't particularly like this method because the diagram produced isn't guaranteed to carry more information other than the *specific* output/input requested. Why?
Ochoa's method intentionally dissects the circuit into short-circuit current, driving point impedance, and voltage sub-blocks ($i_{sc}*DPI = v_{node}$). This is great in that we can easily get impedance, $\frac { i_{ out } }{ i_{ in } }$, and $\frac { v_{ out } }{ v_{ in } }$ naturally by looking at the diagram because it's already intentionally broken into current, impedance, voltage. The "Fake Label" method instead takes all the dependent sources, makes them fake independent sources, and re-builds a diagram by turning each source on one at a time. While the overall transfer function (as in output/input only) might be accurate, the "Fake Label" feedback diagram won't be in current to impedance to voltage blocks, so information from intermediate nodes within the overall diagram may not be accurate/present. Thus with "Fake Label", you end up drawing different diagrams for different specified transfer functions (like the examples below with voltage gain and output impedance). However, the useful part of "Fake Label" is that it's simpler than DPI/short-circuit current - we're only looking at one source at a time. For larger circuits, I can see using "Fake Label" over Ochoa's. For example, let's look at "Fake Label" for the cascode's voltage gain. First, we would turn all the $g_{m}$ dependent sources into independent current sources. Our control variables will be $v_{x}$ and $v_{out}$. Fig. 8 Now we step through turning on one source at a time. If $v_{in}$ were on but the other two current sources off (i.e. open), $v_{x}$ and $v_{out}$ see no contribution. Hence nothing to write about here. If $g_{m2}$ (called $i_{2}$) were on and $v_{in}$ were off (i.e. short to ground) while $g_{m1}$ were off (open), then we see the following. Fig. 9 Eqn. 8 If $g_{m1}$ (called $i_{1}$) were on while both $v_{in}$ and $g_{m2}$ were off, then we see the following. Fig. 10 Eqn. 9 Eqns 8-9 create Fig. 11 (which has the correct voltage gain).
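Before trusting any of these graphs, it's handy to have a ground truth. The sketch below (with made-up device values — none come from the figures) solves the two KCL equations of the small-signal model in Fig. 2 by Cramer's rule and checks the results against the familiar open-circuit cascode closed forms:

```python
# Brute-force nodal check of the cascode small-signal model (Fig. 2).
# M2 is the input device (gm2, ro2); M1 is the cascode device (gm1, ro1).
# Device values are assumed, illustrative numbers only.
gm1, gm2 = 2e-3, 2e-3          # transconductances, S
ro1, ro2 = 50e3, 50e3          # output resistances, ohm

def solve2(a, b, c, d, e, f):
    """Cramer's rule for [[a, b], [c, d]] @ [x, y] = [e, f]."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# Voltage gain, open-circuit output:
#   node v_x:   (1/ro2 + gm1 + 1/ro1) vx - (1/ro1) vout = -gm2 vin
#   node v_out: -(gm1 + 1/ro1) vx + (1/ro1) vout = 0
vin = 1.0
vx, vout = solve2(1/ro2 + gm1 + 1/ro1, -1/ro1,
                  -(gm1 + 1/ro1), 1/ro1,
                  -gm2 * vin, 0.0)
Av = vout / vin
assert abs(Av - (-gm2 * ro2 * (1 + gm1 * ro1))) < 1e-6   # textbook gain

# Output impedance: ground vin, push a test current i_t into v_out.
it = 1e-6
vx, vout = solve2(1/ro2 + gm1 + 1/ro1, -1/ro1,
                  -(gm1 + 1/ro1), 1/ro1,
                  0.0, it)
Rout = vout / it
assert abs(Rout - (ro1 + ro2 + gm1 * ro1 * ro2)) < 1e-3  # textbook Rout
print(f"Av = {Av:.0f}, Rout = {Rout/1e6:.2f} Mohm")
```

Whichever meringue recipe you follow - algebra, DPI/SFG, or "Fake Label" - the numbers should land on these same values.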
However, this diagram doesn't describe the output impedance (and I wouldn't use Fig. 11's $\frac { v_{ out } }{ v_{ x } }$ either since the intermediate nodes aren't meticulously split into current and DPI). Again, we haven't re-conditioned the system into short-circuit current, driving point impedance, and voltage sub-blocks (a sub-block for each node) but rather look at the aggregate contribution to a node via several "independent" sources. Because the processes are different (which leads to different equations as well), we get different block diagrams. Fig. 11 This means that if we want output impedance (via "Fake Label"), we'll have to construct another diagram... Sources will be $v_{t}$ (previously the output voltage) and $i_{1}$. $i_{2}$ disappears because the input is grounded in order to calculate output impedance. Control variables are $i_{t}$ and $v_{x}$. Fig. 12 If $i_{1}$ is on and $v_{t}$ is off, then we see the following. Fig. 13 Eqn. 10 If $v_{t}$ is on and $i_{1}$ is off, then we see the following. Fig. 14 Eqn. 11 Putting this all together leads to the following (Fig. 15). The overall transfer function is equal to the inverse of the output impedance! Fig. 15 As we've seen, there are a lot of ways to bake meringue. In terms of pure output/input transfer function, they all end up with the same result. Depending on what you're looking for, one way may be better than the others (Table 1). Again, it's a matter of personal taste (along with objective), but I would definitely take a look at both Feedback in Analog Circuits by Agustin Ochoa and Ultra Low Power Bioelectronics by Rahul Sarpeshkar if this is interesting. Seems like classical feedback is somewhat of a dying art nowadays (boo), but the intuition is still fun and it's always good to expand one's palate, even if it all makes the same meringue. Hah! Table. 1

## Saturday, June 24, 2017

### "Hello, World!" A Short Recap of Thoughts on Industry...
Long time no chat 😜 Surprisingly, I am *not* dead (although I did come close during one particularly bad China trip). A lot has happened since undergrad - got my master's, industry-ed for ~3 years, shipped millions, went blonde, etc. Regretfully (or maybe luckily hahaha) most of that can't be shared due to friendly Silicon Valley NDAs :) As a sort of time capsule exercise, here are a couple of personal realizations amassed thus far. 1. Life expectancy in the US is about 75-80 years. • Given that I'm in my mid to late twenties while writing this, I have about ~50 years of life left. In that case, 5-year increments are about 10% of my remaining life. 2. Industry is almost entirely dominated by the manufacturing machine. • The manufacturing machine feeds on optimization. This in turn means low risk, high volume, high reliability, and fast turnaround. However, this also means that often enough: • Priorities are placed on "what" and "how much", less so on "why" and "how come". • Novelly creative designs aren't highly valued (since they may be riskier, slower to manufacture, harder to plan out/around, etc.). New ideas are often met with skepticism, but this is understandable. • Nonetheless, make sure you have the right spec/figure-of-merit priority before heading down any single execution path. Challenge yourself - always think of at least 3 different implementations before settling on any single one. Sometimes your first idea isn't the best. • Don't become enamored with one implementation style. "If all you have is a hammer, everything looks like a nail." • The adage "all models are wrong, some are useful" is forever true. People often (accidentally) mislead with data. Always keep a watchful eye out... Double-check everyone's data / model / assumptions (including your own!!!) • Don't assume that people understand what you're saying (and don't assume you understand what they mean!). 
Give your instructions / suggestions / comments, and then ask the person to reiterate what *they think you meant*. Do the same when you're given a task! Most communication problems stem from small misunderstandings. • Budget enough time for rigorous analysis beforehand. Problems will haunt you if you don't. • While I understand that *certain* product cycles may not have enough time for full simulations/mock-ups, this "we don't engineer, we make a ton of configs at the build" technique only works for companies with more money than time... • Also, the whole "million different configs" method always involves drama. Limited build quantities mean everyone has to share. The worst result is people not getting the right data (either too little or too messy). Other stressful consequences range from overly long builds to overly chaotic build matrices - all with a huge dollop of engineering drama. (Whose units can we use for this config, when are they going to be inputted, who gets the final allocation, is this going to disrupt Reliability's quantities, etc.) • No one is above Murphy's Law. • You'll almost always end up regretting shortcuts. The worst shortcuts are the forgotten ones. Those are a nightmare to debug. • People like to say root cause matters, but often completely forget its importance once there's a tight enough tourniquet around the wound (note: horrible mindset to adopt). • As a matter of personal principle, *always* nip failure modes in the bud. What might be a 3-off or 5-off in the first build rapidly grows into a fail rate of 10%, 30%, 50%, etc. in subsequent builds. Early problems often have relatively easy solutions - but only if you implement the fix early, not a few builds later! Never skip your debug/analysis chores. • Occam's razor is sharp. Before going after some crazy, complicated theory, always cover all your basics. Check the obvious things you assume *must* be working. Systems can be majorly screwed up by the simplest things. • Divide and conquer. 
Split your debug tactics logically. Try to check SW, then HW, then branch down a decision tree until you know whether the issue is completely SW or completely HW (or somehow tangled in both, although that's extremely rare). Once you have an understanding of the true root cause, *then* brainstorm solutions. Always take the SW fix if you can (so you can avoid waiting for another expensive build, reliability testing, etc.) • Large-scale debug typically consists of data correlations, odd distributions, and borderline rude emails asking why distributions look so bad. • *Always* do the correlation study first. It's cheap (no need for physical hardware, can be done anywhere at any time), relatively fast, and can give you a lot of insight. When starting, just plot everything. True, correlation doesn't mean causation, but sometimes bizarre things end up correlating in ways that do make sense (ex: a high PMU temp correlating to large decreases in battery SOC). Go overboard first, then fine-tune. • A few choice CDF plots can save your product. • Your work must be useful to the company/product (this should be self-evident). The only way to get projects you want is to sell them in terms of company/product benefit. This will place artificial limits on what you do with your time. • You wear a lot of hats. Sometimes you'll have to play program manager, test engineer, build lead, statistician, salesman, psychiatric counselor, and lastly electrical engineer, all in one day. • Some people genuinely love all of this. Although I may never understand why/how they do, I've come to accept that people exist on axes orthogonal to my own. 3. Visceral technology consists of improvements in the following: 1) Energy, 2) Time, 3) Space, 4) Mind:Mind Interface, 5) Mind:Machine Interface. • If your technology (*cough cough, consumer electronics*) doesn't either 1) improve one of those 5 significantly, or else 2) improve at least two of those decently, then it's not going to make much of an impact. 
At best, it'll be regarded as some weird Silicon Valley toy. 4. Having all the money in the world doesn't matter if you don't have time to spend with your friends and family. 5. Understand your personal motivations and agenda. Remember, you are not an open-loop system! If you can re-identify your personal motivations and agenda, you can always apply negative feedback to bring yourself back on track. Also, just as the input signal can vary with time, so can you! Don't beat yourself up if your goals have changed/evolved over the years. Just be the very best closed-loop system you can! Anyway, I'm planning to blog again because I miss the virtual ether of the internet. See you all soon! 😊
https://nuclearrambo.com/wordpress/cable-impedance-profile-with-nanovna-and-tdr-script/
We have gone over the NanoVNA so many times in the past. I did a complete review of the inexpensive NanoVNA last month. Furthermore, I compared it with the super-expensive Keysight N9952A Vector Network Analyzer. Finally, I wrote a small script to compute the TDR response from the S-parameter data that the NanoVNA gave us. Today, I wrote a small extension to the TDR script to plot the impedance profile of a cable. In this article, we will go over the basic mathematics involved in computing the impedance profile, and finally, you all get the script to try it out. ## Understanding the concept Computing TDR (time domain reflectometry) from frequency domain data results in what we call the "impulse response". In other words, it's the response of the DUT (device under test) when we inject an impulse. In our previous test case, where we used a segment of a cable as the DUT, the impulse went through the cable and reflected back from the open end. Remember that in our case, this impulse is imaginary because we never send it in the first place, instead synthesising it using the inverse Fourier transform. According to a Keysight application note, we can derive the impedance profile of the cable if we obtain a step response instead of an impulse response. Now, what is a step response? The step response is what we get when we apply a sudden step signal at the input of the DUT. A step looks like a sharp square-wave edge, except that the voltage transitions from 0V to 1V (or whatever voltage) and stays there forever. Unit step function For those of you who don't know, you can obtain the step response of the DUT if you already have the impulse response. Finding the impulse response and step response involves slightly advanced knowledge of signal processing. You can skip the following section. In order to find the step response of any system, we need to convolve the impulse response with the step input. 
$$y(n) = \sum_{k=-\infty}^\infty h(k)u(n-k)$$ Since the TDR impulse response $$h(k)$$ is $$0$$ for $$k < 0$$, and the unit step $$u(n-k)$$ is $$1$$ only for $$k \leq n$$, the limits of the summation change. Additionally, we are only interested in evaluating up to $$N$$ samples, where $$N$$ is the number of $$FFT$$ points. Thus, the equation reduces to a running sum: $$y(n) = \sum_{k=0}^{n} h(k)$$ Once we obtain the step response, we are very close to plotting the impedance profile. Remember that $$S_{11}$$ is the reflection coefficient (return loss). We need to convert it into impedance $$Z$$. To do so, we follow the simple derivation below. $$S_{11} = \frac{ Z_{in} - Z_o }{Z_{in} + Z_o}$$ Rearranging the terms gives us an equation to compute $$Z_{in}$$ from $$S_{11}$$. $$Z_{in} = Z_o \biggl(\frac{1 + S_{11}}{1 - S_{11}}\biggr)$$ Now, let us try to implement this in Python. ## Python implementation As mentioned earlier, the TDR script gives us the impulse response of the DUT. We now compute the step response by adding a few lines to the script. The steps: 1. Create a step waveform (basically an array of all ones) 2. Convolve it with the impulse response 3. Transform $$S_{11}$$ to $$Z$$ 4. Truncate the step response to $$NFFT$$ points ## The results While testing this script, I created two test scenarios: one with the open-ended cable, and another where I terminated the cable end with $$100\Omega$$. Open-ended cable Cable terminated in 100 Ohm resistance There are two things to observe above. The impulse response peak and the sudden spike in impedance align almost perfectly. The second thing to note here is that the open-ended cable shows a good $$50\Omega$$ impedance until the point of the open circuit. From that point onward, the impedance spikes to very high levels; an indication of an open-circuited cable. Ideally, it should approach $$\infty$$. In the second image, we are looking at the second scenario, where I terminated the cable in a $$100\Omega$$ resistance. Instead of approaching $$\infty$$, the impedance graph settles at $$100\Omega$$. 
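The four steps listed above can be sketched as follows. This is a minimal illustration, not the author's actual script - the function name, the absence of windowing, and the clipping guard are my own assumptions:

```python
import numpy as np

def impedance_profile(s11_freq, z0=50.0, nfft=1024):
    """Rough sketch: frequency-domain S11 samples -> TDR impedance profile."""
    # Impulse response: inverse FFT of the frequency-domain S11 data
    impulse = np.fft.ifft(s11_freq, n=nfft)
    # Step response: convolving with a unit step reduces to a running
    # sum of h(k), i.e. y(n) = sum_{k=0}^{n} h(k)
    step = np.cumsum(impulse.real)
    # Each step-response sample is the reflection coefficient at that
    # delay; convert to impedance with Z = Z0 * (1 + S11) / (1 - S11)
    gamma = np.clip(step, -0.999, 0.999)  # guard against division blow-up
    return z0 * (1 + gamma) / (1 - gamma)
```

With a perfectly matched line ($$S_{11}=0$$ everywhere) this returns a flat $$50\Omega$$ trace, while a full reflection drives the profile toward very large values, mirroring the open-circuit spike in the plots above.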
This indicates that our script is working as expected. If any of you have the time or means to do a multi-impedance plot, go ahead. All you need to do is attach different impedances along the length, and we should see a corresponding variation in the graph. The cable terminated with a 100 Ohm resistance The full script is below. If you haven't purchased a NanoVNA yet, go ahead and get one. It's worth it!
https://protonstalk.com/thermal-properties-of-matter/anomalous-expansion-of-water/
# Anomalous Expansion of Water Water is the essential component of any living form on Earth. Therefore, it becomes important to study the nature of water for any serious study of life and organisms. Water is a special molecule with some peculiar and unique phenomena linked to it. One such unusual phenomenon is the anomalous thermal expansion of water in the temperature range of 0°C to 4°C. ## What is Anomalous Expansion of Water? Most liquids expand upon heating and contract upon cooling. Water, within a specific temperature range, seems to break this rule. "The Anomalous Expansion of Water is the increase in the volume of water when cooled from 4°C to 0°C (or 39.2°F to 32°F)." The figure given below shows a plot of the density vs temperature of water. The density of water increases from 0°C to a maximum at 4°C. After 4°C, it decreases, like any other normally behaving liquid. Anomalous expansion happens in the 0°C - 4°C range: as water cools through this range, its density decreases and hence its volume increases. It is also clear from the plot that the density of water is maximum at 4°C, with a value of 0.9998395 g/cm³ ≈ 1 g/cm³. ## Reason for Anomalous Expansion of Water Ice has an "open" crystal structure containing a lot of empty space, so ice is less dense than water. When ice melts at 0°C, the $$H_2O$$ molecules begin to lose this open hydrogen-bonded structure. This results in a smaller intermolecular distance between $$H_2O$$ molecules. Hence, density increases from 0°C up to the maximum at 4°C. After 4°C, a rise in temperature results in increased intermolecular distance due to the increase in kinetic energy of the molecules. ## Consequences and Applications 1. Preservation of Aquatic Ecosystems Anomalous expansion of water is important for sustaining aquatic life in cold regions or during winter. During cold weather, the top layer of a water body cools first. The temperature of the top layer drops to 4°C. 
The top layer then becomes denser than the other layers and descends to the bottom of the water body. This process happens continuously, creating a temperature gradient. The bottommost layer remains at 4°C, a temperature at which aquatic life can thrive. The temperature gradually decreases as the depth decreases, so the topmost layer is the coldest. Eventually, the top layer freezes, forming an insulating blanket that prevents further freezing to some extent. If there were no anomalous expansion, the water body would have frozen altogether. 2. Weathering of Rocks: Water seeps into the cracks and crevices of rocks. During winter, when the temperature falls below 4°C, the water expands, exerting hydrostatic pressure on the rock. This produces cracks in the rock and helps in weathering it. 3. Pipeline Leakage: Some water pipelines start to leak when the temperature falls below 4°C. This is due to the pressure created by the anomalous expansion of water. ## FAQs What is Anomalous Expansion of Water? The Anomalous Expansion of Water is the increase in the volume of water when cooled from 4°C to 0°C (or 39.2°F to 32°F). How to prove anomalous expansion of water at home? Fill a plastic bottle with water to the brim. Close the mouth of the bottle with a cork. Cool the bottle in a freezer. The cork comes out after some time as the temperature of the water falls. What are some other anomalously expanding liquids? Gallium, silicon, germanium, etc., show anomalous expansion similar to water.
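The density maximum described above can be reproduced numerically. The sketch below uses Kell's (1975) empirical fit for the density of air-free water; the formula itself is standard, but note it is an illustration added here, not part of the original article:

```python
import numpy as np

def water_density(t):
    """Density of air-free water in kg/m^3 (Kell 1975 fit, valid 0-150 C)."""
    num = (999.83952 + 16.945176*t - 7.9870401e-3*t**2
           - 46.170461e-6*t**3 + 105.56302e-9*t**4 - 280.54253e-12*t**5)
    return num / (1 + 16.879850e-3*t)

# Scan 0-10 C and locate the density maximum
t = np.linspace(0.0, 10.0, 10001)
rho = water_density(t)
t_max = t[np.argmax(rho)]
print(f"density peaks at {t_max:.2f} C ({rho.max():.4f} kg/m^3)")
```

Running this places the maximum near 3.98°C at roughly 999.97 kg/m³, matching the ≈ 4°C and ≈ 1 g/cm³ values quoted above.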
http://scjv.peters-agrar.de/latex-argmax-below.html
## LaTeX Argmax Below LaTeX has no built-in argmax command. Writing $\arg \max_{\substack{w \\ \phi}} f(w,\phi)$ puts the subscript at the right of \max, whereas one usually wants the subindexes below and centered on the whole "arg max" in display math. There are a few ways to get this: The plain-TeX workaround \mathop{\rm argmax}\limits_x f(x) places the limit underneath in display style, but it gives no control over the kern between "arg" and "max". With the amsmath package, declare the operator once in the preamble with \DeclareMathOperator*{\argmax}{arg\,max}; the starred form makes the operator take its limits below (like \max and \lim) in display style, and the \, inserts the conventional thin space. The same construction works for \argmin. More generally, \limits forces sub/superscripts to sit below/above an operator, and \overset{annotation}{symbol} (or \underset) places an annotation above (or below) an arbitrary symbol in a smaller type size. For other symbols, the Comprehensive LaTeX Symbol List documents thousands of symbols and the corresponding LaTeX commands that produce them.
While some presenters may opt for static images or layouts which may seem less than perfect to present equations, however, that may also end up ruining the presentation. Code, Compile, Run and Debug python program online. Series implements argmax and argmin. h: /* * These are the maximum length and maximum number of strings passed to the * execve() system call. Information theory for linguists: a tutorial introduction Information-theoretic Approaches to Linguistics LSA Summer Institute John A Goldsmith The University of Chicago. A high variance means the model pays much attention to small fluctuations in the training data - which directly makes it perform bad on test data. 2) Optimal objective value. Ohio Supercomputer Center (OSC) has a variety of software applications to support all aspects of scientific research. This banner text can have markup. If you find any broken links, I would greatly appreciate it if you would. theta), code: \arg\max_\theta. edu is a platform for academics to share research papers. All gists Back to GitHub. Your writing style is awesome, keep doing what you’re doing! And you can look our website about proxy server list. Use it to embed R code and results into slideshows, pdfs, html documents, Word files and more. ) km hm dam dm cm mm µm ha hl dal dl cl ml µl kg hg dag dg cg mg µg ms µs GHz MHz kHz Hz. The second statement then tests whether the Format field is empty. Systematic adherence to mathematical concepts is a fundamental concept of mathematical notation. For example, the following example illustrates that \sum is one of these elite symbols whereas \Sigma is not. If you work with a computer, having a typing skill is a must. SE that shows how argmin and argmax with limits can be typesetted using the \DeclareMathOperator* command. Jun 20, 2012 · LaTeX - Math - Free download as PDF File (. For more info, see http://www. While there are many packages available for this, I'm going to use glossaries package. 
png]] Below image is shown 100 pixels tall. edu Abstract We further investigate the problem of recognizing hand-. Scribd is the world's largest social reading and publishing site. The decision has been made to support 2. One very important probability density function is that of a Gaussian random variable, also called a normal random variable. Apr 05, 2016 · Machine Learning A Probabilistic Perspective Kevin P. Dec 24, 2018 · Clearly label axes of plots, table rows/columns, and use descriptive captions for results displayed in this fashion. out: ndarray, None, or tuple of ndarray and None, optional. bilateralFilter. Jan 10, 2018 · Andrej Karpathy LaTex generator Basic NN with single hidden layer All shapes are activations (an activation is a number that has been calculated by a relu, matrix product, etc. count ([level]) Return number of non-NA/null observations. Oct 30, 2011 · When you create a new math field in Lyx, simply type \argmax or \argmin. LaTeX/Mathematics - Wikibooks, open books for an open world http://en. Show your derivations. \NeedsTeXFormat{LaTeX2e} \ProvidesClass{worksheet}[2017/04/30 worksheet class] \LoadClassWithOptions{article} %--OPTIONS----- % asy -- for including asymptote code in. would take this was below timer resolution, ArgMax took 7 seconds. (See code below for example of interruptible optimization that keeps last optimization. The idea is to find a new set of orthogonal bases and ignore those containing negligible variance components. (backslash \ has to be escaped, so in your LaTeX code you have to replace \ with \\). The decision has been made to support 2. argmax and np. What appears to be questionable for me in that code is the fixed learning rate. Abstract Garbage and waste disposal is one of the biggest challenges currently faced by mankind. org/wiki/LaTeX/Mathematics One of the greatest motivating forces for Donald Knuth when he. 
I have managed to write this piece of code but as I am using it on big data sets it ends up being quite slow. The terminology from AMS-LaTeX. Note, that integral expression may seems a little different in inline and display math mode - in inline mode the integral symbol and the limits are compressed. Fortunately, there's a tool that can greatly simplify the search for the command for a specific symbol. There are posts in TeX. argmax in multi-dimension tensor in the Tensorflow using convolution neural network. % Use this template to write your solutions to COS 423 problem sets \documentclass[11pt]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath, amsfonts, amsthm, amssymb, algor. Machine Learning Fall 2017 Homework 2 Homework must be submitted electronically on Canvas. We assume that n < m and, for simplicity, that there is only a single primary variable; the extension to multiple primary variables simply requires one to include all of them in the model fit occurring in each Step 1 below. This allows the students to scribble on the paper and to keep the Excel sheet short which saves scrolling. commands to write argument of max/min equations. argmax (self[, axis, Trim values below a given threshold. Ова страница описује нека напреднија коришћења математике у LaTeXу. Unlike many other classifiers which assume that, for a given class, there will be some correlation between features, naive Bayes explicitly models the features as conditionally independent given the class. It should look very similar to \cref{eq:lrgrad} or \cref{eq:simplified-lrgrad} below (either form is acceptable for full credit). Online LaTeX editor with autocompletion, highlighting and 400 math symbols. conf etc interface. Data Science In Go: A Cheat Sheet from chewxy. Thanks very much. You can see more examples and info here; Related questions. argmax to pick the vertex. cls % load also amstex package, with leqno option % Close to AER journal style (but not 2 column!!). 
DDPG has two components: the actor which is the deterministic po. Memorization & Generalization; Advanced topics 1 How to make decisions in the presence of uncertainty? There are di erent examples of applications of the Bayes Decision Theory (BDT). In the previous chapters we showed how you could implement multiclass logistic regression (also called softmax regression) for classifiying images of handwritten digits into the 10 possible categories (from scratch and with gluon). All of the fonts covered herein meet the following criteria: 1. I used it for MNIST and got an accuracy of 99% but on trying it with CIFAR-10 dataset, I can't get it above 15%. A simple way to visualize this is to think of rotating your original set of axes in the hyperspace. Exploring some Python Packages and R packages to move /work with both Python and R without melting your brain or exceeding your project deadline ----- If you liked the data. edu Abstract There is compelling evidence that corefer-ence prediction would benefit from modeling. So you can type \def\exp#1#2{#1^#2} or \renewcommand{\raise}[2]{#1^#2}, select the typed text, and type Ctrl-M to convert it to a math-macro. This introduces a similar effect to tanh(x) or. Visualization The picture below shows an alternative way of looking at the problem. ValueError: attempt to get argmax of an empty sequence The code is processing information from images sent to it from simulator. Currently it seems that the MediaWiki software doesn't support enough LaTeX or TeX to be able to improve it, though. Welcome to the Comprehensive LATEX Symbol List! This document strives to be your primary source of LATEX symbol information: font samples, LATEX commands, packages, usage details, caveats—everything needed to put thousands of different symbols at your disposal. However, adding a subscript to your \max or \min will place the subscript under \max or \min only, not under argmax or argmin as a whole. 
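Several of the excerpts above circle the same fix: with the amsmath bundle you can define starred operators via \DeclareMathOperator*, so that in display style a subscript lands underneath the whole "arg max", exactly as it does for \max. A minimal sketch (the symbols θ, w, φ are placeholders, not from any particular excerpt):

```latex
\documentclass{article}
\usepackage{amsmath} % provides \DeclareMathOperator*

% The starred form makes limits-style subscripts sit *under*
% the operator in display math, like \max and \min do.
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}

\begin{document}
\[
  \hat{\theta} = \argmax_{\theta} \; L(\theta)
  \qquad
  w^{*} = \argmin_{w,\,\phi} \; f(w,\phi)
\]
\end{document}
```

In inline math the subscript falls back to the side, which is usually what you want; add \limits after \argmax to force the under-placement there too.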
YOUR TITLE by YOUR NAME A [thesis jdissertation] submitted in partial fulfillment of the requirements for the degree of [Master of XXXX jDoctor of Philosophy] in. Naive Bayes, also known as Naive Bayes Classifiers are classifiers with the assumption that features are statistically independent of one another. amax functions, along with a host of other delights: To launch an interactive version of the notebook in your browser, click on the “launch Binder” button below. The supremum (abbreviated sup; plural suprema) of a subset S of a partially ordered set T is the least element in T that is greater than or equal to all elements of S, if such an element exists. The label in the top right is the correct classification, according to the MNIST data, while in the bottom right is the label output by our ensemble of nets: It's worth looking through these in detail. To suggest that LaTeX insert a page break inside an amsmath environment, you may use the \displaybreak command before the line break. When you later feel that you would benefit from having a standalone LaTeX installation, you can return to this chapter and follow the instructions below. diag¶ numpy. Look at the formulas i provided and the wikipedia page about argmax! – Spen Jan 18 '17 at 20:44. 2 Asymptotically Consistent Algorithms. Shieber School of Engineering and Applied Sciences Harvard University Cambridge, MA, USA fswiseman,srush,[email protected] 在 2012年8月18日星期六UTC-5下午8时18分41秒,Michael Shell写道: It still does not work. In the code below, I use this API to convert our 5-grams words into a (1, 1500) vector and our labels into a (1, 15) label vector and write them out to files for the next stage. Proper waste disposal and recycling is a must in any sustainable community, and in many coastal areas there is. The TeX way doesn't work: \mathop{\rm argmax}\limits_x f(x). If provided, it must have a shape that the inputs broadcast to. 
amax functions, along with a host of other delights: To launch an interactive version of the notebook in your browser, click on the “launch Binder” button below. would take this was below timer resolution, ArgMax took 7 seconds. frame structure in R, you have some way to work with them at a faster processing speed in Python. Bayes Decision Theory Prof. 请看下面效果: To show the effect of the matrix on surrounding lines inside a paragraph, we put it here: $\begin{pmatrix}\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\end{pmatrix}$ and follow it with enough text to ensure that there is at least one full line below the matrix. For example, if you type $\sqrt2$ and click this icon, you’ll get 2 in your document without opening MathType. This behavior is also seen in around and round. The terminology from AMS-LaTeX. Active 9 years, 11 months ago. Show and explain the steps you take to derive the gradient. Memoization¶ pytools. This requires \usepackage{amsmath} (actually amsopn would be sufficient, it's automatically loaded by amsmath which is recommended for math typesetting anyway). Insert a Preformatted Equation in PowerPoint. For Type \asmash \dsmash \hsmash \smash \hphantom \phantom \vphantom For example: For Type Comments [] [\phantom (a \atop b)]. If its an offset then this will be the time period of each window. Kernel size, I used in all cases were 9. I have a four band raster, and I would like to generate a new raster that has the argmax of the bands for each pixel. Information theory for linguists: a tutorial introduction Information-theoretic Approaches to Linguistics LSA Summer Institute John A Goldsmith The University of Chicago. This introduces a similar effect to tanh(x) or. array: The ExtensionArray of the data backing this Series or Index. LivDet 2013 utilized for use of the non-cooperative method without user interference for creating spoof images. Regardless of the history, typesetting mathematics is one of LaTeX's greatest strengths. 
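One excerpt above asks how to build, from a four-band raster, a new raster holding the argmax over the bands for each pixel. A minimal NumPy sketch (the band values here are invented for illustration):

```python
import numpy as np

# Hypothetical 4-band "raster" with shape (bands, rows, cols).
raster = np.array([
    [[0.1, 0.9], [0.3, 0.2]],    # band 0
    [[0.5, 0.0], [0.6, 0.1]],    # band 1
    [[0.2, 0.05], [0.05, 0.4]],  # band 2
    [[0.2, 0.05], [0.05, 0.3]],  # band 3
])

# Per-pixel index of the strongest band: reduce over axis 0 (the bands).
band_index = np.argmax(raster, axis=0)   # values: [[1, 0], [1, 2]]

# np.amax gives the winning value itself rather than its index.
best_value = np.amax(raster, axis=0)     # values: [[0.5, 0.9], [0.6, 0.4]]

print(band_index)
print(best_value)
```

The same axis argument distinguishes np.argmax (index of the max) from np.amax (the max itself), which is the contrast the notebook mentioned above explores.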
n]) in equations is $\sum_{i=1}^n expression$ Note that in inline code this will put lower and upper bound next to the sum sign, not over and under. class DataFrame (object): """All local or remote datasets are encapsulated in this class, which provides a pandas like API to your dataset. 在 2012年8月18日星期六UTC-5下午8时18分41秒,Michael Shell写道: It still does not work. Jul 26, 2019 · Parameters: x: array_like. Gmsh is built around four modules: geometry, mesh, solver and post-processing. The performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. Workflow R Markdown is a format for writing reproducible, dynamic reports with R. out: ndarray, None, or tuple of ndarray and None, optional.
http://umj.imath.kiev.ua/article/?lang=en&article=7141
# System $G|G^κ|1$ with Batch Service of Calls

Abstract. For the queuing system $G|G^κ|1$ with batch service of calls, we determine the distributions of the following characteristics: the length of a busy period, the queue length in transient and stationary modes of the queuing system, the total idle time of the queuing system, the output stream of served calls, etc.

English version (Springer): Ukrainian Mathematical Journal 54 (2002), no. 4, pp. 548-569.

Citation example: Kadankov V. F., Yezhov I. I. System $G|G^κ|1$ with Batch Service of Calls // Ukr. Mat. Zh. - 2002. - 54, № 4. - pp. 447-465.
https://ireti.org/saskatoon-star-izqiwd/d7f4a2-calculate-the-percentage-of-nitrogen-in-magnesium-nitride
Magnesium nitride (Mg3N2) has a higher percentage composition of magnesium than magnesium oxide (MgO) does. Chemistry Q&A: Magnesium and nitrogen react in a composition reaction to produce magnesium nitride: Mg + N2 -----> Mg3N2 (unbalanced). When 9.27 grams of nitrogen reacts completely, what mass in grams of magnesium is consumed? The idea here is that you need to use the chemical formula for magnesium nitride, Mg3N2, to calculate the mass of one mole of the compound, i.e. Magnesium forms both an oxide and a nitride when burned in air. The only product is magnesium bromide. Magnesium nitride can also be made from ammonia:

$$3\,\mathrm{Mg} + 2\,\mathrm{NH_3} \xrightarrow{700\,^{\circ}\mathrm{C}} \mathrm{Mg_3N_2} + 3\,\mathrm{H_2}$$

Magnesium nitride is an inorganic compound made of magnesium and nitrogen. 24.1 b. Find the atomic mass of N in Mg3N2. When the products are treated with water, 2.813 g of gaseous ammonia are generated. I haven't been able to figure this one out.
There is only 60.31% of magnesium in magnesium oxide. 1. The product that are formed during the decomposition of magnesium nitride is 3Mg + N2 (answer D). find the atomic mass of Mg Na2O = Sodium Oxide 4. N 0.441 C. 0.509 D. 1.02 E. Not, When heated, lithium reacts with nitrogen to form lithium nitride: 6Li+N2 --> 2Li3N What is the theoretical yield of Li3N in grams when 12.4 g of Li is heated with 33.9g of N2, A 3.214-g sample of magnesium reacts with 8.416g of bromine. Magnesium nitride, Mg3N2, can be produced from magnesium metal, Mg, and nitrogen gas, N2, in the following reaction: 3 Mg + N2 → Mg3N2 In a chemistry laboratory, you calculate the mass of available N2 to be 10.5 grams. Services, Working Scholars® Bringing Tuition-Free College to the Community. Home. Calcium reacts with nitrogen to form calcium nitride. A. {\displaystyle {\begin {matrix} {}\\ {\ce { {3Mg}+N2-> [ {\ce {800^ {\circ }C}}]Mg3N2}}\\ {}\end {matrix}}} or ammonia: 3 Mg + 2 NH 3 → 700 ∘ C Mg 3 N 2 + 3 H 2. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. The formation of magnesium nitride has a significant impact on the percentage composition of magnesium oxide. Homework Help. Your dashboard and recommendations. The percentage of nitrogen in magnesium nitride is.? Mass Percent: Magnesium: Mg: 24.3050: 3: 72.244%: Nitrogen: N: 14.0067: 2: 27.756% ›› The reaction between magnesium and nitrogen is represented by the unbalanced equation below. 2.0 mol … a) Write a balanced chemical equation for this reaction. In the experiment, water was added to convert the small amount of magnesium nitride (Mg, N.) produced into magnesium oxide (MgO). 1. So, if the Mg3N2 was only 90% pure with the impurities containing no Mg, its mass percent would be 0.90*72.2%=65% and it would still have more Mg than MgO per unit of mass. Number of moles magnesium = 6.0 moles. 
The white solid that remains after this combustion reaction contains mainly magnesium oxide mixed with a little magnesium nitride. Calculate the mass percent of magnesium (Mg) in magnesium nitride (Mg, N.) 2. KF = Potassium Fluoride 3. Nitrogen exists in the air in the form of diatomic molecules, N2. Which organisms are capable of converting gaseous nitrogen in the air into a form that other living organisms can use? What is the pecentage yield of 10.1g Mg reacts with an excess of water abnd 21.0g Mg 9g (OH)2 is recovered? Apparently we have to most again of nitrogen for one wall of compound, and then one more multiple. Become a Study.com member to unlock this The nitrogen content in these compounds is needed for protein synthesis in plants. Get the detailed answer: What is the formula for magnesium nitride? Do the same thing by the last percent of nitrogen. It will completely be consumed 6.0 moles. Nitrogen dioxide is a brown gas. Resolution of normal reaction in a smooth inclined plane; Hey there my name is Pranav, preparing for neet 2017 need study plan for one month for class 11&12. So magnesium nitride is an ionic compound made up of magnesium cations ##Mg^(2+)## and nitride … Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library. Our experts can answer your tough homework and study questions. Magnesium nitride (Mg3N2) has a higher percentage composition of magnesium than magnesium oxide (MgO) does. Personalized courses, with or without credits. HOT QUESTIONS. After the reaction, you have produced 35.2 grams Mg3N2. The percentage composition of an element is the ratio of its total mass present in a compound and the total mass or the molecular mass of the compound multiplied by 100. its molar mass. % Nitrogen = ( 28.02 148.27) = .189 or 18.9%. For 3 molesmagnesium (Mg) we need 1 mol nitrogen gas (N2) to react, to produce 1 mol magnesium nitride (Mg3N2) Magnesium is the limiting reactant. 
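The limiting-reactant bookkeeping described here, for 3 Mg + N2 -> Mg3N2 with the amounts quoted nearby (6.0 mol Mg, 4.0 mol N2), can be sketched in a few lines; the variable names are my own:

```python
# Amounts quoted in the text and stoichiometric coefficients.
mol_mg, mol_n2 = 6.0, 4.0
coeff_mg, coeff_n2 = 3, 1

# Divide each amount by its coefficient; the smaller quotient
# marks the limiting reactant and fixes the reaction extent.
extent_mg = mol_mg / coeff_mg   # 2.0
extent_n2 = mol_n2 / coeff_n2   # 4.0
limiting = "Mg" if extent_mg < extent_n2 else "N2"

mol_product = min(extent_mg, extent_n2)        # mol Mg3N2 formed
mol_n2_left = mol_n2 - mol_product * coeff_n2  # unreacted N2

print(limiting, mol_product, mol_n2_left)
```

This reproduces the text's conclusion: magnesium is limiting, 2.0 mol of Mg3N2 forms, and 2.0 mol of N2 is left over.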
4.0 mol of nitrogen is reacted with 6.0 mol of magnesium. After several days, the color of the gas is the same in both of the jars. b) What is the percentage yield if 10.1g Mg reacts with an excess of water and 21.0g Mg(OH)2, The cover plate is removed from the gas jars shown iin the diagram. (example: a change in appearance of the magnesium, any odour, any smoke) and appearance of the final product in the crucible. Solution 161PStep 1 of 3:The goal of the problem is to find the Step 2: The balanced equation. {/eq}. Calculate the amounts of magnesium nitride and magnesium oxide formed. ›› Magnesium Nitride molecular weight. 3.7 million tough questions answered. SCH3U: Percentage Composition of Magnesium Oxide 1 Date: _____ Name: _____ In this experiment you will use experimental data to determine the percentage composition of magnesium oxide. Total Mass = 148.27 g/mol. nitrogen-fixing bacteria denitrifying bacteria decomposers producers Which step in the nitrogen cycle is. Indicate which reactant is limiting, calculate the mass of calcium nitride formed when 50.0g of calcium reacts with 50.0g of Nitrogen. Calculate the mass percent composition of nitrogen in each of the fertilizers named in the problem. Magnesium Nitride Formula formula, also known as Trimagnesium Dinitride formula or Trimagnesium Nitrogen (-3) Anion formula is explained in this article. Magnesium nitride reacts with water to give ammonia. Study Guides. When magnesium is ignited in air, the magnesium reacts with oxygen and nitrogen. A 21.496-g sample of magnesium is burned in air to form magnesium oxide and magnesium nitride. If there is more magnesium nitride in the mixture, then the percentage of magnesium will also be higher. This reaction emits ammonia gas as a product. Sciences, Culinary Arts and Personal N 2 atoms x 14.01 = 28.02. 0.882 B. The formula for magnesium nitride is {eq}Mg_{3}N_{2} O 6 atoms x 15.99 = 95.94. (a) A diagram of a nitrogen atom is shown below. 
Suppose this step was skipped so that the solid magnesium Mg3N2 = Magnesium Nitride 2. (2 pts) 3Mg(s) + N2(g) → Mg3N2(s) 5)B) If 11.5 g of product are produced, what is the percent yield? Mg: 1 atom × 24.31 = 24.31. Approximately 28%. Explanation: for Mg3N2,

$$\%N = \frac{\text{mass of nitrogen}}{\text{molar mass of magnesium nitride}} \times 100\% = \frac{2 \times 14.01\ \text{g mol}^{-1}}{100.95\ \text{g mol}^{-1}} \times 100\% = 27.8\%,$$

where the total mass is 3 × (atomic mass of Mg) + 2 × (atomic mass of N). The chemical formula of magnesium nitride is Mg3N2: 3Mg + N2 → Mg3N2. Data Processing and Presentation: 1. Magnesium powder reacts with steam to form magnesium hydroxide and hydrogen gas. A student observes unreacted magnesium remaining in the crucible. Calculate the mass of magnesium, the mass of magnesium oxide, and then calculate the percentage composition, by mass, of magnesium in magnesium oxide: % Mg in MgO, % O in MgO. 2. Sorry, I've completely forgotten how to do this. What is the percentage of nitrogen in N2O? If 1.934 g of magnesium is left unreacted, how much magnesium bromide is formed? How many moles of magnesium nitride form when 1.0 mole of magnesium reacts completely? How much excess reactant do you have? Molar mass of Mg3N2 = 100.9284 g/mol. Step 3: Calculate the limiting reactant. Calculate the mass of magnesium oxide produced. Note: we will use the "constant mass" of the crucible + lid + magnesium oxide in the results table and disregard earlier, lighter masses, which indicate that the reaction had not yet gone to completion. 0.197 g of magnesium is burned in air: 2Mg + O2 ---> 2MgO. However, some of the magnesium reacts with nitrogen in the air to form magnesium nitride instead: 3Mg + N2 ---> Mg3N2, so you have a mixture of MgO and Mg3N2.
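The percent-composition figures quoted around here for Mg3N2 (about 72.24% Mg and 27.76% N) can be checked directly; the atomic masses below are the usual approximate values:

```python
# Mass-percent composition of Mg3N2 from approximate atomic masses.
M_MG = 24.305   # g/mol, magnesium
M_N = 14.007    # g/mol, nitrogen

molar_mass = 3 * M_MG + 2 * M_N          # ~100.93 g/mol for Mg3N2
pct_mg = 3 * M_MG / molar_mass * 100     # mass % magnesium
pct_n = 2 * M_N / molar_mass * 100       # mass % nitrogen

print(round(pct_mg, 2), round(pct_n, 2))
```

The two percentages sum to 100 by construction, which is a quick sanity check on this kind of calculation.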
What is the percentage of nitrogen in magnesium nitride?

Magnesium nitride, also known as trimagnesium dinitride, is an inorganic compound of magnesium and nitrogen with the chemical formula Mg3N2: each formula unit contains 3 atoms of magnesium and 2 atoms of nitrogen. Percentage composition tells us about the complete elemental composition of a chemical compound in terms of mass.

Taking three atoms of Mg and two atoms of N gives a formula mass of 3 × 24.3050 + 2 × 14.0067 = 100.93 u, the unit that represents the minimum amount of magnesium nitride. The total mass of nitrogen in it is {eq}2 \times 14.0067 \, u = 28.01 \, u{/eq}, so

% N = (28.01 / 100.93) × 100 ≈ 27.8 %, i.e. about 28 %,
% Mg = (72.92 / 100.93) × 100 ≈ 72.24 %.

For comparison, magnesium oxide, MgO, is only 60.31 % magnesium, so magnesium nitride has a higher percentage composition of magnesium than magnesium oxide. In magnesium nitrate, Mg(NO3)2, with formula mass 148.27 u, the fractions are % Mg = 24.31/148.27 ≈ 16.4 %, % N = 28.02/148.27 ≈ 18.9 % and % O = 95.94/148.27 ≈ 64.7 %. The nitrogen content matters when comparing fertilizers: nitrogen in a form that living organisms can use is needed for protein synthesis in plants, and nitrogen-fixing bacteria are the organisms capable of converting the gaseous nitrogen of the air (present as diatomic N2 molecules) into such a form.

Related questions:

- Magnesium is a reactive metal that combines with both oxygen and nitrogen when burnt in air. When 0.197 g of magnesium is burned in air, 2 Mg + O2 → 2 MgO, some of the magnesium reacts with the nitrogen in the air instead, 3 Mg + N2 → Mg3N2, so the white solid that remains is magnesium oxide mixed with a little magnesium nitride.
- Magnesium nitride can be prepared by passing dry nitrogen over heated magnesium: 3 Mg + N2 → Mg3N2 (at about 800 °C).
- Limiting reactant: 56 g of Mg would form 78 g of Mg3N2, while 35 g of N2 would form 126 g, so magnesium is the limiting reactant; if only 33 g of Mg3N2 is recovered, the percentage yield is 33/78 × 100 ≈ 42 %. Likewise, from 8.0 mol of Mg and 2.0 mol of N2, the nitrogen is limiting (8.0 mol Mg would need 2.67 mol N2), so 2.0 mol ≈ 202 g of Mg3N2 forms and 2.0 mol of Mg remains unreacted.
- When magnesium is heated in air, it reacts with oxygen to form magnesium oxide, 2 Mg + O2 → 2 MgO. If the mass of the magnesium increases by 0.335 g, how many grams of magnesium reacted?
- Magnesium nitride reacts with an excess of water to form magnesium hydroxide, Mg(OH)2, and ammonia; magnesium powder likewise reacts with steam to form magnesium hydroxide and hydrogen gas. For example: what volume of ammonia gas at 24 °C and 753 mmHg will be produced from 4.56 g of magnesium nitride?
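The percent-composition arithmetic for Mg3N2 can be double-checked with a few lines of Python (a quick sketch, not part of the original question; the variable names are mine):

```python
# Mass percent composition of magnesium nitride, Mg3N2,
# using the atomic masses quoted in the text (g/mol).
MG = 24.3050   # magnesium
N = 14.0067    # nitrogen

molar_mass = 3 * MG + 2 * N             # ~100.93 g/mol per formula unit
pct_mg = 3 * MG / molar_mass * 100.0    # ~72.24 % magnesium
pct_n = 2 * N / molar_mass * 100.0      # ~27.76 % nitrogen, i.e. about 28 %

print(f"Mg3N2: {molar_mass:.2f} g/mol, {pct_mg:.2f}% Mg, {pct_n:.2f}% N")
```

The same three lines work for any binary compound once the subscripts and atomic masses are swapped in.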
2021-09-19 08:32:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6008962988853455, "perplexity": 5944.219321922346}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056752.16/warc/CC-MAIN-20210919065755-20210919095755-00084.warc.gz"}
https://mathoverflow.net/questions/285279/can-the-methods-of-classical-algebraic-geometry-be-made-rigorous-with-a-syntheti
# Can the methods of classical algebraic geometry be made rigorous with a synthetic approach? There are approaches to real analysis that use an axiomatization of nilpotent infinitesimals to enable rigorous synthetic reasoning about infinitesimals, which is arguably closer to the reasoning employed by mathematicians prior to the arithmetization of analysis. While I am far from an expert, it seems that the conventional narrative is that increasing doubts about the validity of some of the results of classical algebraic geometry led to a similar arithmetization of algebraic geometry by Zariski and Weil. The gap between the resulting methods and the underlying geometric reasoning seems a bit wider than the gap between $\epsilon$-$\delta$ arguments and infinitesimals, although that is entirely personal opinion and may only be due to my familiarity with the latter. Is it possible to do algebraic geometry in a synthetic manner that enables rigorous reasoning but is closer to the style of argument employed by classical algebraic geometers? • The "schemes as functors from the category of algebras" approach might be doing exactly that. But has it succeeded in reproving the important theorems? Nov 5, 2017 at 3:50 • Arithmetization of algebraic geometry begins not with Zariski and Weil, but with Descartes. Proving certain things synthetically can be possible in principle but prohibitively complicated without coordinates. Nov 5, 2017 at 13:20 • @AlexandreEremenko I meant synthetic in a more general sense, e.g. synthetic differential geometry, which still features coordinates. Nov 5, 2017 at 17:44 I am not sure what you would call synthetic algebraic geometry. Nevertheless, using some methods which are sometimes called synthetic, Fyodor Zak magnificently reproves and improves many results claimed by Italian geometers. 
In particular his classification of Severi varieties (see: http://mathecon.cemi.rssi.ru/zak/files/Zak_TSAV.pdf) could certainly be considered a masterpiece of modern synthetic projective geometry. • At about which page does the 141-pages-long paper enter the "synthetic" point of view? I've had a look at the beginning and it seems pretty much in the usual point-set style of modern classical geometry (I mean, complex varieties inside projective space, not the Grothendieck generality). Oct 15, 2018 at 20:15 • It depends on what you call synthetic. I would consider pages 37 to 47 to be almost completely synthetic, and pages 70 to the end are also almost completely synthetic. Oct 16, 2018 at 10:27 Is it possible to do algebraic geometry in a synthetic manner that enables rigorous reasoning but is closer to the style of argument employed by classical algebraic geometers? I sure hope so. You can have a look at notes of mine which develop the basics of a synthetic account of algebraic geometry. Especially relevant is Section 20, which presents a couple of case studies, including computing the cohomology of Serre's twisting sheaves. To give a short teaser: Synthetically, we can define the projective space $$\mathbb{P}(V)$$ associated to a vector space $$V$$ simply as the set of one-dimensional subspaces of $$V$$. The twisting "sheaf" $$\mathcal{O}(-1)$$ is then simply the family $$(\ell)_{\ell \in \mathbb{P}(V)}$$ of vector spaces. Its dual sheaf is the family $$(\ell^\vee)_{\ell \in \mathbb{P}(V)}$$. The scheme structure is automatically taken care of. However there is still much more to be done. The most pressing concerns are maybe: • There should be an easy and intuitive synthetic description of proper morphisms. Right now we do have a synthetic description, but it's very close to the usual non-synthetic description and doesn't exploit the unique possibilities of the synthetic context. 
• We have to develop a synthetic account of cohomology, derived categories and intersection theory.
2023-02-05 01:55:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7209619283676147, "perplexity": 324.69250665082814}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00540.warc.gz"}
https://www.math4wisdom.com/wiki/Exposition/QuantumPhysics
Contact Andrius Kulikauskas ms@ms.lt +370 607 27 665 Eičiūnų km, Alytaus raj, Lithuania Support me! Patreon... in September? Paypal to ms@ms.lt Bookshelf Thank you! It seems that if one is working from the point of view of getting beauty into one's equation, and if one has really a sound insight, one is on a sure line of progress. - P.A.M. Dirac My insights: The ways of figuring things out in physics. The problem of measurement. Choice frameworks. Duality of counting forwards and backwards. Combinatorial interpretation of continuous functions, especially of orthogonal polynomials and solutions to Schroedinger's equation. Duality of the "mother function" {$e^x$}, which equals its own derivative. This page was last changed on February 26, 2021, at 11:38 AM.
2021-09-17 15:30:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5932376384735107, "perplexity": 4186.778896393625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00550.warc.gz"}
https://cs.stackexchange.com/tags/computational-geometry/new
# Tag Info 1 Here is a simpler solution if the points lie only on the axes, running in time $O(n^2)$. Scroll down for a solution for the general case of any set of points in the plane. Let us distinguish three cases of triangles. The first is when the origin is a point of the triangle (and hence it must be the right angle). The second case is where two points lie on the $x$ ... 1 Addition of vectors is commutative, so the order of moves is irrelevant. For example: $(0,2) + (3,1) + (2,-2) = (0,2) + (2,-2) + (3,1) = (3,1) + (0,2) + (2,-2) = (5,1)$ So a simple brute-force method is to calculate the distance traveled for all subsets of the set of $n$ possible moves. There are $2^n$ such subsets, but you don't need to consider the empty ... 1 I would suggest that you don't calculate the distance between all pairs of points, but only between pairs of points that are nearest neighbors. You could store the points in some nearest-neighbor data structure, e.g., an octree, or even a fixed partitioning of space. Some data structures might allow for relatively rapid updates, so you could even update ... 2 If you choose a formula with parameter $t$ you'll get the $P_x$ value as the result of a number of operations - additions, multiplications and one division: $$t = \frac{(x_1-x_3)(y_3-y_4)-(y_1-y_3)(x_3-x_4)}{(x_1-x_2)(y_3-y_4)-(y_1-y_2)(x_3-x_4)}$$ $$P_x = x_1 + t(x_2-x_1)$$ The number $x_2=72057594037954921$ contains 17 significant decimal digits - so it ... Top 50 recent answers are included
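The precision worry in the last answer can be sidestepped by evaluating the same formula in exact rational arithmetic. Here is a small Python sketch (the function name and the test points are mine, not from the question):

```python
from fractions import Fraction

def intersection_param(p1, p2, p3, p4):
    """Exact parameter t of the intersection of line p1-p2 with line
    p3-p4, and the resulting x-coordinate P_x, using the formula from
    the answer above with rational arithmetic (no rounding at all)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = (
        tuple(map(Fraction, p)) for p in (p1, p2, p3, p4)
    )
    num = (x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = num / den          # raises ZeroDivisionError for parallel lines
    px = x1 + t * (x2 - x1)
    return t, px

# The two diagonals of the square (0,0)-(2,2) cross halfway along:
t, px = intersection_param((0, 0), (2, 2), (0, 2), (2, 0))
```

Here `t == Fraction(1, 2)` and `px == 1` exactly; with 17-digit inputs such as $x_2=72057594037954921$, `Fraction` keeps every digit, at the cost of speed.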
2020-02-20 18:48:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7571021318435669, "perplexity": 207.38177092761885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145260.40/warc/CC-MAIN-20200220162309-20200220192309-00101.warc.gz"}
https://www.numerade.com/questions/the-following-diagram-shows-the-combination-reaction-between-hydrogen-mathrmh_2-and-carbon-monoxide-/
Auburn University Main Campus

Problem 2

The following diagram shows the combination reaction between hydrogen, $\mathrm{H}_{2},$ and carbon monoxide, $\mathrm{CO},$ to produce methanol, $\mathrm{CH}_{3} \mathrm{OH}$ (white spheres are $\mathrm{H},$ black spheres are $\mathrm{C},$ red spheres are O). The correct number of CO molecules involved in this reaction is not shown. [Section 3.1]

(a) Determine the number of CO molecules that should be shown in the left (reactants) box.
(b) Write a balanced chemical equation for the reaction.

Answers: (a) four CO molecules; (b) $4 \mathrm{CO}+8 \mathrm{H}_{2} \longrightarrow 4 \mathrm{CH}_{3} \mathrm{OH}$, or in lowest terms $\mathrm{CO}+2 \mathrm{H}_{2} \longrightarrow \mathrm{CH}_{3} \mathrm{OH}$. See the explanation below for the solution.

## Video Transcript

Okay, so in this problem we have to find out how many CO molecules should be placed here. The white spheres shown here are hydrogen, which means we have eight hydrogen molecules in total, and we have four methanol molecules, which are produced from the reaction between carbon monoxide and hydrogen. So methanol is being produced here from the reaction of an unknown number of carbon monoxide molecules and eight molecules of hydrogen, and from this we have to find out how many molecules of carbon monoxide should be placed here.

To do that, we first write down the reaction equation: carbon monoxide reacts with hydrogen gas to form methanol, $\mathrm{CH}_{3}\mathrm{OH}$. To balance the equation, since we don't have any metal here, we first balance the non-metal, carbon: we have one carbon on each side, so the carbon is already balanced. Next we balance the hydrogen: we have two hydrogens on the reactant side and four hydrogens on the product side, which means we have to multiply the hydrogen by two to have four hydrogens on the reactant side. Now we have one carbon and four hydrogens on both the reactant side and the product side. Finally we balance the oxygen, and we can see that we have one oxygen on each side, so the oxygen is also balanced. This gives the balanced equation for the reaction of carbon monoxide and hydrogen: $\mathrm{CO}+2 \mathrm{H}_{2} \longrightarrow \mathrm{CH}_{3} \mathrm{OH}$.

That means one molecule of carbon monoxide reacts with two molecules of hydrogen to form one molecule of methanol. Since we have four methanols produced, we have to multiply both sides by four, so we will require four carbon monoxide molecules and four times two, or eight, hydrogen molecules to form four methanols. If you look at the diagram, you can see that we have four methanols here and eight hydrogens here, which means we need four carbon monoxide molecules to complete the diagram: there should be four CO molecules in the reactants box. For part (b) we write down the balanced equation for the reaction as drawn, and that is $4 \mathrm{CO}+8 \mathrm{H}_{2} \longrightarrow 4 \mathrm{CH}_{3} \mathrm{OH}$.
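As a sanity check on the stoichiometry (a sketch of mine, not part of the original solution), a few lines of Python confirm that 4 CO + 8 H2 → 4 CH3OH conserves every element:

```python
from collections import Counter

def atoms(side):
    """Total atom count for one side of an equation, given a list of
    (coefficient, per-molecule composition) pairs."""
    total = Counter()
    for coeff, molecule in side:
        for element, n in molecule.items():
            total[element] += coeff * n
    return total

CO = {"C": 1, "O": 1}
H2 = {"H": 2}
CH3OH = {"C": 1, "H": 4, "O": 1}

reactants = atoms([(4, CO), (8, H2)])
products = atoms([(4, CH3OH)])
print(reactants == products)  # True: the equation balances
```

The same helper works for any equation once the molecules are written as element-count dictionaries.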
2020-05-30 15:03:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5835847854614258, "perplexity": 1639.5594119091875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409337.38/warc/CC-MAIN-20200530133926-20200530163926-00029.warc.gz"}
https://www.semanticscholar.org/paper/Proof-Complexity-Meets-Algebra-Atserias-Ochremiak/4be1ae2d9e62afd75532bbfa0b47a207129f9612
# Proof Complexity Meets Algebra

@article{Atserias2017ProofCM,
title={Proof Complexity Meets Algebra},
author={Albert Atserias and Joanna Ochremiak},
journal={ArXiv},
year={2017},
volume={abs/1711.07320}
}

• Published 20 November 2017
• Mathematics, Computer Science
• ArXiv

We analyse how the standard reductions between constraint satisfaction problems affect their proof complexity. We show that, for the most studied propositional, algebraic, and semi-algebraic proof systems, the classical constructions of pp-interpretability, homomorphic equivalence and addition of constants to a core preserve the proof complexity of the CSP. As a result, for those proof systems, the classes of constraint languages for which small unsatisfiability certificates exist can be…

## 12 Citations

Proof Complexity Meets Algebra
• Computer Science, Mathematics • ACM Trans. Comput. Log. • 2019
It is shown that, for the most studied propositional, algebraic, and semialgebraic proof systems, the classical constructions of pp-interpretability, homomorphic equivalence, and addition of constants to a core preserve the proof complexity of the CSP.

A Finite-Model-Theoretic View on Propositional Proof Complexity
• Mathematics, Computer Science • ArXiv • 2018
This work shows that the power of several propositional proof systems, such as Horn resolution, bounded-width resolution, and the polynomial calculus of bounded degree, can be characterised in a precise sense by variants of fixed-point logics that are of fundamental importance in descriptive complexity theory.

Chapter 7. Proof Complexity and SAT Solving
• Computer Science • 2021
CNF SAT serves as the canonical hard decision problem and is frequently conjectured to require exponential time to solve; in contrast, for practical theorem proving, CNF SAT is the core method for encoding and solving problems.

Promise Constraint Satisfaction and Width
• Computer Science • ArXiv • 2021
The main technical finding is that the template of every PCSP that is solvable in bounded width satisfies a certain structural condition implying that its algebraic closure-properties include weak near-unanimity polymorphisms of all large arities.

Sailing Routes in the World of Computation
• Computer Science • Lecture Notes in Computer Science • 2018
The tutorial focuses on computably enumerable (c.e.) structures, a class that properly extends the class of all computable structures, and the interplay between important constructions, concepts, and results in computability, universal algebra, and algebra.

GROUP, GRAPHS, ALGORITHMS: THE GRAPH ISOMORPHISM PROBLEM
• L. Babai • Computer Science • Proceedings of the International Congress of Mathematicians (ICM 2018) • 2019
Graph Isomorphism (GI) is one of a small number of natural algorithmic problems with unsettled complexity status in the P/NP theory: not expected to be NP-complete, yet not known to be solvable in…

Algorithm Analysis Through Proof Complexity
This work surveys the proof complexity literature that adopts this approach relative to two $\mathsf{NP}$-problems: k-clique and 3-coloring.

The limits of SDP relaxations for general-valued CSPs
• Computer Science, Mathematics • 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS) • 2017
It has been shown that for a general-valued constraint language Γ the following statements are equivalent: (1) any instance of VCSP(Γ) can be solved to optimality using a constant level of the…

Nullstellensatz size-degree trade-offs from reversible pebbling
• Computer Science, Mathematics • Computational Complexity Conference • 2019
We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and…

H-coloring Dichotomy in Proof Complexity
The $\mathcal{H}$-coloring problem can be considered an example of a computational problem from the huge class of constraint satisfaction problems (CSP): an $\mathcal{H}$-coloring of a graph…
2021-12-01 07:49:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.745193362236023, "perplexity": 2939.1507317727305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00571.warc.gz"}
https://math.stackexchange.com/questions/2144665/perimeter-of-ellipse/4259832#4259832
# perimeter of ellipse

As is well known, there is no formula expressing the perimeter of the ellipse $(\frac{x}{a})^2+(\frac{y}{b})^2=1$ as an elementary function of $a$ and $b$. I am interested in finding an exact formulation of this fact, and its proof. Especially, could one impose conditions on $a$ and $b$ such that an elementary-function expression of the perimeter is available? (Loosely speaking, similar to Galois theory, which discusses the solvability of polynomial equations.) Is there a purely algebraic formulation of the problem?

• You should probably read a bit about elliptic integrals. Feb 14 '17 at 21:43
• I want a proof that shows that an elliptic integral is not in general expressible by means of elementary functions. Are there any algebraic proofs, for example? @IttayWeiss – XIE Jan 17 '18 at 13:55
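As an aside to the question: although the arc length has no elementary closed form, it is easy to evaluate numerically. A stdlib-only sketch, integrating the arc-length integrand of $(a\cos t, b\sin t)$ over a quarter period with the trapezoid rule:

```python
import math

def ellipse_perimeter(a, b, n=100_000):
    # Arc length of (a*cos t, b*sin t): integrate sqrt(a^2 sin^2 t + b^2 cos^2 t)
    # over [0, pi/2] with the trapezoid rule, then multiply by 4.
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        w = 0.5 if k in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * math.hypot(a * math.sin(t), b * math.cos(t))
    return 4 * total * h

print(ellipse_perimeter(1, 1))  # circle of radius 1: 2*pi ≈ 6.283185...
```

For $a=b$ this reduces to the circle circumference $2\pi a$, which makes a convenient sanity check.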
2021-11-29 15:57:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8258249163627625, "perplexity": 224.73649487191776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00018.warc.gz"}
https://puzzling.stackexchange.com/questions/38650/what-is-the-next-number-in-the-series
# What is the next number in the series? What is the next number of this series? 1 , 4 , 5 , 6 , 7 , 9 , 11 ?
2019-04-22 18:55:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185968041419983, "perplexity": 114.89587698825017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578577686.60/warc/CC-MAIN-20190422175312-20190422201312-00530.warc.gz"}
https://artirj.github.io/BitcoinVolatility/
# Analysis of Bitcoin Volatility

Here I study whether Bitcoin’s volatility has been decreasing. It is widely reported that this is the case, and indeed if one looks at Eli Dourado’s btcVol.info it seems to be. But is it? There is a previous analysis by a friend of mine that uses a statistical test (Augmented Dickey-Fuller) to check whether the series is stationary, and the result is that the data is compatible with a stationary process. That is, the underlying distribution from which daily returns are drawn is constant across time. In what follows, I try to replicate his study, showing some limitations of ADF.

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
sns.set()
%matplotlib inline

from forex_python.bitcoin import BtcConverter
b = BtcConverter()
_ = b.get_latest_price('USD')

start_date = datetime.datetime(2011, 1, 1)
end_date = datetime.datetime(2017, 1, 1)
r = b.get_previous_price_list('USD', start_date, end_date)

data = pd.DataFrame.from_dict(r, orient='index')
data.columns = ['Price']
data.sort_index(inplace=True)
data.index = pd.to_datetime(data.index)
data.reset_index(inplace=True)
data.rename(columns={'index': 'Date'}, inplace=True)
```

```
        Date  Price
0 2011-01-01  0.300
1 2011-01-02  0.300
2 2011-01-03  0.295
3 2011-01-04  0.299
4 2011-01-05  0.299
```

```python
_ = data.plot(x="Date", y="Price")
data['Daily change'] = data['Price'].pct_change(1).multiply(100)
_ = data.plot(title="Bitcoin daily change (%)", x='Date', y='Daily change')

data['Daily rolling 10'] = data['Daily change'].rolling(window=10, center=False).std()
data['Daily rolling 30'] = data['Daily change'].rolling(window=30, center=False).std()
data['Daily rolling 60'] = data['Daily change'].rolling(window=60, center=False).std()

plt.figure(figsize=(12, 6))
ax = data.plot(title="Bitcoin daily change (%)", y='Daily rolling 10', x='Date')
_ = data.plot(y='Daily rolling 30', x='Date', ax=ax)
_ = data.plot(y='Daily rolling 60', x='Date', ax=ax)
```

```python
# First, we could do some curve fitting
data['Date_n'] = (data['Date'] - data['Date'][0]) / np.timedelta64(1, 'D')
fig = plt.figure(figsize=(10, 8))
sns.regplot(data=data, x='Date_n', y='Daily rolling 10', scatter_kws={"s": 10, "alpha": 0.5}, label='Daily rolling 10')
sns.regplot(data=data, x='Date_n', y='Daily rolling 30', scatter_kws={"s": 10, "alpha": 0.5}, label='Daily rolling 30')
sns.regplot(data=data, x='Date_n', y='Daily rolling 60', scatter_kws={"s": 10, "alpha": 0.5}, label='Daily rolling 60')
plt.ylabel('Daily change (%)')
plt.legend()
```

```python
import statsmodels.api as sm
y = data['Daily rolling 10']
X = data['Date_n']
model = sm.OLS(y, sm.add_constant(X), missing='drop').fit()
model.summary()
```

```
Dep. Variable:     Daily rolling 10   R-squared:            0.178
Model:             OLS                Adj. R-squared:       0.178
Method:            Least Squares      F-statistic:          472.5
Date:              Sun, 08 Jan 2017   Prob (F-statistic):   5.36e-95
Time:              12:48:59           Log-Likelihood:       -5749.0
No. Observations:  2183               AIC:                  1.150e+04
Df Residuals:      2181               BIC:                  1.151e+04
Df Model:          1
Covariance Type:   nonrobust

            coef     std err    t        P>|t|   [95.0% Conf. Int.]
const       6.9094   0.145     47.579    0.000    6.625    7.194
Date_n     -0.0025   0.000    -21.738    0.000   -0.003   -0.002

Omnibus:        812.011   Durbin-Watson:   0.072
Prob(Omnibus):  0         Jarque-Bera:     3098.36
Skew:           1.818     Prob(JB):        0
Kurtosis:       7.565     Cond. No.:       2550
```

Perhaps it would be better to use a nonlinear function, something like an exponential with a decreasing term like $f(x,a,b,c)=ae^{-xb}+c$. For optimisation purposes, I normalise the series first.
```python
from scipy.optimize import curve_fit

def normalize(X, Y):
    # centre and scale both series to zero mean and unit variance
    return (X - np.mean(X)) / np.std(X), (Y - np.mean(Y)) / np.std(Y)

def func(x, a, b, c):
    return a * np.exp(-x * b) + c

def plot(X2, y2, popt, n):
    plt.subplot(1, 3, n)
    plt.plot(X2, func(X2, *popt))
    plt.plot(X2, y2)

y = data['Daily rolling 10'].dropna()
X = data['Date_n'][y.index]
X1, y1 = normalize(X, y)
popt1, pcov = curve_fit(func, X1, y1)
fig = plt.figure(figsize=(12, 6))
plot(X1, y1, popt1, 1)
plt.title('Exponential fit for Daily rolling 10')
print("Parameters:", popt1)

y = data['Daily rolling 30'].dropna()
X = data['Date_n'][y.index]
X2, y2 = normalize(X, y)
popt2, pcov = curve_fit(func, X2, y2)
plot(X2, y2, popt2, 2)
plt.title('Exponential fit for Daily rolling 30')
print("Parameters:", popt2)

y = data['Daily rolling 60'].dropna()
X = data['Date_n'][y.index]
X3, y3 = normalize(X, y)
popt3, pcov = curve_fit(func, X3, y3)
plot(X3, y3, popt3, 3)
plt.title('Exponential fit for Daily rolling 60')
print("Parameters:", popt3)

plt.figure(figsize=(12, 6))
plt.plot(X1, func(X1, *popt1) - y1, label='Daily rolling 10')
plt.plot(X2, func(X2, *popt2) - y2, label='Daily rolling 30')
plt.plot(X3, func(X3, *popt3) - y3, label='Daily rolling 60')
plt.title('Residual plot')
plt.legend()
```

```
Parameters: [ 0.26753345  1.07601779 -0.4516436 ]
Parameters: [ 0.36183029  1.01534461 -0.57939952]
Parameters: [ 0.42689696  0.98383072 -0.66565455]
```

I would say that the fit looks right (it seems to be exponentially decreasing, and it seems that variance is decreasing), especially if we remove some of the ‘big’ events like bubbles, etc. Now we bring in ADF (which I don’t have experience using), checking the results we get for a series both with and without a trend. Here is the documentation for this function. First I run some tests to see how the function behaves. The parameter of interest is the second in the returned array, a pseudo p-value.
I’ll take it so that if the p-value < 0.05, we reject the null hypothesis (for ADF, the null is that the series has a unit root, i.e. follows a random walk).

```python
from statsmodels.tsa.stattools import adfuller

# Here there *is* a trend. The null hypothesis is: constant (the variable
# follows a random walk). Because there is a trend, the test fails.
series = np.linspace(0, 10, 100) + np.random.randn(100) / 10
plt.plot(series)
adfuller(series)
```

```
(0.030371082537879741, 0.96097283593236305, 5, 94,
 {'1%': -3.5019123847798657, '10%': -2.5834538614757809, '5%': -2.8928152554828892},
 -130.36360151635336)
```

Here there is a trend. So we tell the test that there is, and ask: does the trend change? We can also do an ADF test assuming that there is a trend:

```python
series = np.linspace(0, 10, 100) + np.random.randn(100) / 10
plt.plot(series)
adfuller(series, regression='ct')
```

```
(-6.0891054304079377, 1.4245907097885184e-06, 3, 96,
 {'1%': -4.0563093932201246, '10%': -3.1544345187717013, '5%': -3.4572550874385124},
 -161.30436800688932)
```

In the next example there is a trend that changes, but it comes from a constant underlying function, a sin(x). The test still says the process doesn’t change. Using the 'ct' option also says that the process doesn’t change, as the changing trend itself (a cosine) is also stationary around zero.

```python
np.random.seed(42)
n = 1000
space = np.linspace(0, 100, n)
var = np.concatenate([np.sin(space[:n // 2]), np.sin(space[n // 2:])]) + np.random.rand(n) / 100
plt.plot(var)
adfuller(var, regression='ct')
```

```
(-14.936838558057197, 2.3543437151057883e-22, 22, 977,
 {'1%': -3.9680661492141187, '10%': -3.1297006047557043, '5%': -3.4149932715005722},
 -8471.9770767955488)
```

We can now try to see what the test would say if it saw the function we just fitted.

```python
plt.plot(func(X3, *popt3))
adfuller(func(X3, *popt3))
```

```
(-16.794102051514088, 1.2352461745010984e-29, 25, 2107,
 {'1%': -3.4334573966160153, '10%': -2.5675007703726296, '5%': -2.8629127187998606},
 -18524.80171562676)
```

So even if the variance were decreasing according to a very clean equation, the test would still say that the process is stationary, so the test is not that useful after all.
Ok, let’s go back to our Bitcoins and do the actual test.

```python
def nice(data, regtype="c"):
    # helper: run the ADF test and return only the p-value
    return adfuller(data.dropna(), regression=regtype)[1]

print("P-value for 10 days averaged volatility: ", nice(data['Daily rolling 10']), '***')
print("P-value for 30 days averaged volatility: ", nice(data['Daily rolling 30']), '***')
print("P-value for 60 days averaged volatility: ", nice(data['Daily rolling 60']), '**')
```

```
P-value for 10 days averaged volatility:  8.59786631742e-05 ***
P-value for 30 days averaged volatility:  0.000113952000692 ***
P-value for 60 days averaged volatility:  0.034705270592 **
```

Thus we find that for the 10-day averaged volatility, the data is consistent with a process with constant variance, and that remains valid for greater averaging windows. Let’s now examine what happens if instead we take as our hypothesis that the series follows a trend (y = ax + b):

```python
print("P-value for 10 days averaged volatility: ", nice(data['Daily rolling 10'], 'ct'), '***')
print("P-value for 30 days averaged volatility: ", nice(data['Daily rolling 30'], 'ct'), '***')
print("P-value for 60 days averaged volatility: ", nice(data['Daily rolling 60'], 'ct'), '**')
```

```
P-value for 10 days averaged volatility:  4.36675365203e-05 ***
P-value for 30 days averaged volatility:  5.24203107756e-05 ***
P-value for 60 days averaged volatility:  0.0309130608266 **
```

Surprise surprise, the test also says this is true! **We cannot reject that the series is generated by a process with decreasing variance either!** For completeness, below I show the ADF test for a number of possible lags. By default the test chooses one using an information criterion.
```python
def makeplot(name, pos):
    plt.subplot(1, 3, pos)
    series = data[name].dropna()
    pvalues = np.zeros((n_lags, 2))
    for i in range(n_lags):
        # fix the number of lags instead of letting the test choose
        pvalues[i, 0] = adfuller(series, maxlag=i, autolag=None)[1]
        pvalues[i, 1] = adfuller(series, maxlag=i, autolag=None, regression='ct')[1]
    plt.plot(range(n_lags), pvalues)
    plt.yscale('log', nonposy='clip')
    plt.ylabel('P-value')
    plt.xlabel('Number of lags for the ADF test')
    plt.legend(['Assuming constant', 'Assuming trend'])

n_lags = 30
fig = plt.figure(figsize=(12, 6))
makeplot('Daily rolling 10', 1)
makeplot('Daily rolling 30', 2)
makeplot('Daily rolling 60', 3)
```

# Conclusion

The ADF test is not very useful for a time series like the BTC-USD exchange rate. Here I would go with plain OLS, which says that the coefficient of the feature ‘Date_n’ is negative at the 95% confidence level. This trend persists even if we consider only, say, post-2014 or post-2015 data. Using a nonlinear exponential fit is even better. Thus the data warrants the conclusion that Bitcoin is, after all, becoming less volatile.
2020-04-02 06:20:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5762731432914734, "perplexity": 5290.161674554913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00442.warc.gz"}
https://space.stackexchange.com/questions/25728/what-would-be-the-minimum-energy-trajectory-for-an-inbound-intercept-course-for
# What would be the minimum energy trajectory for an inbound intercept course for another body?

Let's say that you have a body inbound to Earth, and you've got plenty of time before it is within range. But you want to intercept it at minimum relative velocity, e.g., you're going to land on it or dock with it. Maybe you want to transfer fuel to a crippled inbound craft. What are the orbital dynamics of this? It seems like a highly elliptical orbit with the inbound body's trajectory intercepting the inbound leg of the ellipse like so:

Is this a good idea? Is there a name for this approach? Also, what are the limits of this approach? Presumably there's a maximum inbound velocity beyond which there's no elliptical orbit, just escape velocity.

I'll assume the chaser (the active vehicle) is already in some orbit about the central body and that the target is in some other orbit about that central body. Given a time $t_0$ at which the chaser makes an impulsive maneuver to transfer to the target and a time $t_1 > t_0$ at which the chaser is to reach the target, there is always at least one conic section that has the central body as one of the foci. At the end, perform another impulsive maneuver to bring the relative velocity below the threshold value. Finding these conic sections is Lambert's problem. Voila! Intercept solved!

The choice of $t_0$ and $t_1$ might well be suboptimal. It might require, for example, a trajectory that exceeds the speed of light (exceeding the speed of light is not a problem in Newtonian mechanics) or a trajectory that intersects the central body. There might well be a better choice. So, rinse and repeat with different choices for $t_0$ and $t_1$. Now you can generate a contour plot that shows cost (using some metric of the two impulsive maneuvers) versus the times $t_0$ and $t_1$. The result is a pork chop plot:

The above shows the costs of various transfer trajectories from Earth to Mars.
Some choices of $t_0$ and $t_1$ are obviously better than others. The costs vary by orders of magnitude in the case of transferring from Earth to Mars. (The plot does not show the ridiculously expensive transfers.) There are two optimal routes, a "short way" that leaves in early August 2005 and arrives in late February 2007, and a "long way" that leaves in early September 2005 and arrives in early October 2007. These are the two bullseyes on the plot.
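The "rinse and repeat" search above can be sketched as a grid scan over $t_0$ and $t_1$. Note that `transfer_cost` below is a stand-in with an arbitrary made-up surface so the scan has something to find; a real implementation would call a Lambert solver for each $(t_0, t_1)$ pair and sum the two impulsive delta-v's:

```python
import numpy as np

def transfer_cost(t0, t1):
    # Stand-in for the real Lambert-problem delta-v cost (hypothetical surface:
    # cheapest departure at day 30 with a 200-day time of flight).
    tof = t1 - t0
    return (t0 - 30.0) ** 2 / 500.0 + (tof - 200.0) ** 2 / 2000.0 + 3.0

departures = np.arange(0.0, 100.0, 5.0)   # candidate t0 (days)
arrivals = np.arange(120.0, 400.0, 5.0)   # candidate t1 (days)

# Cost matrix over the grid; arrival before departure is impossible.
cost = np.array([[transfer_cost(t0, t1) if t1 > t0 else np.inf
                  for t1 in arrivals] for t0 in departures])

i, j = np.unravel_index(np.argmin(cost), cost.shape)
best = (departures[i], arrivals[j], cost[i, j])
print("best departure, arrival, cost:", best)
```

A contour plot of `cost` against `departures` and `arrivals` is precisely a pork chop plot; the argmin picks out one of its bullseyes.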
2020-01-20 21:27:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5171424746513367, "perplexity": 500.19034656703764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599789.45/warc/CC-MAIN-20200120195035-20200120224035-00109.warc.gz"}
https://matplotlib.org/3.2.0/api/_as_gen/mpl_toolkits.axisartist.axis_artist.html
# mpl_toolkits.axisartist.axis_artist¶

The axis_artist.py module provides axis-related artists. They are

• axis line
• tick lines
• tick labels
• axis label
• grid lines

The main artist classes are AxisArtist and GridlinesCollection. While GridlinesCollection is responsible for drawing grid lines, AxisArtist is responsible for all other artists. AxisArtist has attributes that are associated with each type of artist:

• line: axis line
• major_ticks: major tick lines
• major_ticklabels: major tick labels
• minor_ticks: minor tick lines
• minor_ticklabels: minor tick labels
• label: axis label

Typically, the AxisArtist associated with an axes will be accessed with the axis dictionary of the axes, i.e., the AxisArtist for the bottom axis is ax.axis["bottom"] where ax is an instance of mpl_toolkits.axislines.Axes. Thus, ax.axis["bottom"].line is an artist associated with the axis line, and ax.axis["bottom"].major_ticks is an artist associated with the major tick lines. You can change the colors, fonts, line widths, etc. of these artists by calling a suitable set method. For example, to change the color of the major ticks of the bottom axis to red, use

ax.axis["bottom"].major_ticks.set_color("r")

However, things like the locations of ticks and their ticklabels need to be changed from the side of the grid_helper.

## axis_direction¶

AxisArtist, AxisLabel, and TickLabels have an axis_direction attribute, which adjusts the location, angle, etc. The axis_direction must be one of "left", "right", "bottom", "top", and follows the Matplotlib convention for a rectangular axis. For example, for the bottom axis (left and right are relative to the direction of the increasing coordinate),

• ticklabels and axislabel are on the right
• ticklabels and axislabel have a text angle of 0
• ticklabels are baseline, center-aligned
• axislabel is top, center-aligned

The text angles are actually relative to (90 + angle of the direction to the ticklabel), which gives 0 for the bottom axis.
Parameter            left     bottom    right    top
ticklabels location  left     right     right    left
axislabel location   left     right     right    left
ticklabels angle     90       0         -90      180
axislabel angle      180      0         0        180
ticklabel va         center   baseline  center   baseline
axislabel va         center   top       center   bottom
ticklabel ha         right    center    right    center
axislabel ha         right    center    right    center

Ticks are by default on the opposite side of the ticklabels. To draw ticks on the same side as the ticklabels, use

ax.axis["bottom"].major_ticks.set_ticks_out(True)

The following attributes can be customized (use the set_xxx methods):

## Classes¶

- AttributeCopier(ref_artist[, klass]) — [Deprecated]
- AxisArtist(axes, helper[, offset, ...]) — An artist which draws the axis (a line along which the n-th axes coord is constant) line, ticks, ticklabels, and axis label.
- AxisLabel(*args[, axis_direction, axis]) — Axis label.
- BezierPath(**kwargs) — [Deprecated]
- GridlinesCollection(*args[, which, axis])
- LabelBase(*args, **kwargs) — A base class for AxisLabel and TickLabels.
- TickLabels(*[, axis_direction]) — Tick labels.
- Ticks(ticksize[, tick_out, axis]) — Ticks are derived from Line2D; note that ticks themselves are markers.
2021-02-26 04:50:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2494586557149887, "perplexity": 8547.29942952702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356140.5/warc/CC-MAIN-20210226030728-20210226060728-00244.warc.gz"}
https://www.elitedigitalstudy.com/1645/solve-the-linear-equation-m-m-12-1-m-23
Solve the linear equation m - $$\frac{m-1}{2} = 1 - \frac{m-2}{3}$$

Asked by Aaryan | 1 year ago |  66

##### Solution :-

m - $$\frac{m-1}{2} = 1 - \frac{m-2}{3}$$

The L.C.M. of the denominators, 2 and 3, is 6. Multiplying both sides by 6, we obtain

6m - 3(m - 1) = 6 - 2(m - 2)

⇒ 6m - 3m + 3 = 6 - 2m + 4 (opening the brackets)

⇒ 6m - 3m + 2m = 6 + 4 - 3

⇒ 5m = 7

⇒ m = $$\frac{7}{5}$$

Answered by Sakshi | 1 year ago

### Related Questions

#### Solve the inequations and graph their solutions on a number line – 1 < ($$\dfrac{x}{2}$$) + 1 ≤ 3, x ε I

#### Solve the inequations and graph their solutions on a number line – 4 ≤ 4x < 14, x ε N

#### Solve ($$\dfrac{x}{3}$$) + ($$\dfrac{1}{4}$$) < ($$\dfrac{x}{6}$$) + ($$\dfrac{1}{2}$$), x ε W. Also represent its solution on the number line.

#### If the replacement set is {-3, -2, -1, 0, 1, 2, 3}, solve the inequation $$\dfrac{(3x – 1)}{2} < 2$$. Represent its solution on the number line.

#### Solve the inequations ($$\dfrac{3}{2}$$) – ($$\dfrac{x}{2}$$) > – 1, x ε N
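The solution m = 7/5 above can be double-checked with exact rational arithmetic (a quick sanity check, not part of the original solution):

```python
from fractions import Fraction

m = Fraction(7, 5)
lhs = m - (m - 1) / 2   # left-hand side:  m - (m-1)/2
rhs = 1 - (m - 2) / 3   # right-hand side: 1 - (m-2)/3
print(lhs, rhs)  # both sides evaluate to 6/5, so m = 7/5 satisfies the equation
```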
2023-02-09 13:00:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6800190806388855, "perplexity": 1280.1114280708866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00556.warc.gz"}
https://yo-dave.com/2013/06/04/keyboard-shortcuts-for-javafx-buttons/
# Keyboard Shortcuts for JavaFX Buttons

Most programs written for graphical user interfaces still provide a way to operate with the keyboard, requiring minimal mouse usage. The thought is that expert users will want to speed through their work keeping their fingers on the keyboard rather than devote an entire hand's worth of fingers to controlling the mouse. I’ve been learning JavaFX, the eventual replacement for the Swing UI framework on Java, and wanted to explore how shortcut functionality had changed. There were a few tutorials on keyboard shortcuts for menu-driven programs, but nothing I could find on their use with button-based interfaces. That’s what I cover here.

# Nomenclature

There are really two groups of these shortcuts. The first group, keyboard shortcuts, are also called keyboard accelerators. They tend to be global. For example, most Windows programs allow you to save a file by simultaneously pressing the Control key and the S key, usually denoted Ctrl-S. (On Mac, the Ctrl key is usually replaced by the Command key.) You don’t have to have a menu open for this to work. The second group, keyboard mnemonics, only operate when the control element in question is visible and are activated with the Alt key plus a character key. For example, on most Windows programs the keyboard sequence Alt-H/Alt-A will display an “About” dialog. The sequence Alt-E/Alt-A will “select all”. The program behavior produced by pressing Alt-A will depend on which menu is displayed. With mnemonics, you navigate the menu system. With shortcuts, you bypass the menu system (if you are using menus at all). I will try to use the terms “shortcut” for keyboard accelerator and “mnemonic” for keyboard mnemonic consistently from this point on.

# Mnemonics

Setting up keyboard mnemonics is pretty straightforward in JavaFX. First, call the setMnemonicParsing() method for the button with an argument of true.
Then, when you set the text to be displayed on the button, prefix the character to be used as the mnemonic with an underline. For example, to display the mnemonic “File” set the text to “_File”. It doesn’t seem to make any difference if you call setMnemonicParsing(true) and you don’t provide the underline in the label text. So it isn’t clear to me why the function is needed at all. Just enable the parsing by default and ignore labels that don’t have any underline. Maybe it is there to provide a way to display an underline in a label without having the next character interpreted as a mnemonic.

For what it’s worth, the operation of mnemonics seems to have changed over the years. Current software, written in recent versions of Swing or JavaFX, doesn’t show the underline for the mnemonic until the user presses the Alt key. I have earlier versions of programs that always show the mnemonic key underlined, although the mnemonics no longer work. Various Windows programs themselves are not consistent in terms of mnemonic usage. For example, Firefox always shows the mnemonic keys underlined while NetBeans does not.

# Shortcuts

A good argument could be made that shortcuts are not very useful for a program with a button-based interface, since buttons do not display the shortcut keys as menus do. The user would actually have to read some program documentation (egad!) in order to discover what shortcuts were available. However, many users are so used to some of the shortcuts (Ctrl-S, Ctrl-O, Ctrl-C, Ctrl-V, etc.) that I include them anyway.

Setting up shortcuts in JavaFX is a bit more complicated than it was in Swing, since you can’t seem to do it at the same time you create the button. It seems that the button must be attached to a Scene before you have a way to add a shortcut. Maybe this is just due to my own ignorance. Here’s an example of how I set up a shortcut and mnemonic on an “Exit” button in a recent program. I hope the elided parts are obvious.
```clojure
...
(:import [javafx.scene.control Button CheckBox RadioButton ToggleGroup Tooltip]
         [javafx.scene.input KeyCode KeyCodeCombination KeyCombination
          KeyCombination$Modifier KeyCombination$ModifierValue])
...

(defn init-exit-btn-accelerator [btn]
  (.put (.getAccelerators (.getScene btn))
        (KeyCodeCombination. KeyCode/X
                             (into-array KeyCombination$Modifier
                                         [KeyCombination/SHORTCUT_DOWN]))
        (proxy [Runnable] []
          (run [] (.fire btn)))))

(defn create-exit-button-click-handler
  "Handle a click on the 'Exit' button."
  []
  (reify EventHandler
    (handle [this event]
      (Platform/exit))))

(defn create-exit-btn []
  (let [btn (btns/create-styled-button "E_xit" (create-exit-button-click-handler))]
    (.setTooltip btn (Tooltip. "Exit the program."))
    (.setMnemonicParsing btn true)
    btn))
...
```

The button is created in the create-exit-btn function along with its event handler. The handling for the mnemonic is set up at the same time the button is created. In the elided code that follows, the button is added to a layout pane, which is eventually attached to a Scene. Some time after that, the init-exit-btn-accelerator function is called to set up the shortcut. Note that the shortcut and an action handler are set up and added in the same function. As should also be apparent, it is not necessary that using the shortcut must activate the same event handler as a mouse click would. It does in this case, but that doesn’t seem to be a requirement.

The shortcut itself is set up by creating a KeyCodeCombination with the desired key. This is a bit more complicated than it was in Swing, where a text string describing the key could be used as an argument. Not sure what advantage the new method provides.

So every button that gets a shortcut gets this treatment. I haven’t done it yet, but it seems like a good candidate for writing a macro to handle creation of the functions from a few pieces of data. Something to do and write about in the future.
2018-08-17 03:06:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29674965143203735, "perplexity": 1948.197495581928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211664.49/warc/CC-MAIN-20180817025907-20180817045907-00085.warc.gz"}
https://fguelzau.netlify.app/publication/regional-mobility-spaces/
# Regional Mobility Spaces? Visa Waiver Policies and Regional Integration

### Abstract

Visa policies today are a central instrument for filtering wanted and unwanted types of travellers, leading to a hierarchy of mobility rights. While there is evidence of a "global mobility divide", we still know little about the role of regional integration when it comes to the distribution of mobility rights and the (re)structuring of mobility spaces. Against this background, the article examines the structure of visa relations in different bodies of regional integration (EU, MERCOSUR, ASEAN, ECOWAS, EAC, NAFTA, SADC and SICA). The article compares visa policies in the member states of these institutions in 1969 and 2010 from a social network perspective. While one would generally expect each institution's member states to become more similar with regard to both internal and external mobility regulations, we find that not all regional clusters align their visa policies. Potential explanations for this state of affairs are investigated.

Type: Publication
In: *International Migration*
https://forum.theotown.com/viewtopic.php?style=1&f=48&t=5806&p=81075
## ✔️ Empty subject-lines (Re: )

Discussion for forum related things.

**mdk_813** (Inhabitant of a Country, Posts: 860, Joined: Fri Dec 16, 2016, Location: Germany):

Hi, I've noticed that for some reason the subject line is often empty now when you want to post a reply to a topic. As a result, numerous postings by different users made in recent days just show "Re:" and then nothing, instead of "Re: Example topic". I think this should be fixed, because it hurts the usability of the forums: you can no longer determine the topic of a reply in the board index at first glance.

*Update Sep 12, 2020: As I am not really active here anymore, I am not able to answer any PMs in time. So, as long as you mention me in the plugin and the description, feel free to use or modify my plugins for your own creations.*

**CommanderABab** (AB, Posts: 8785, Joined: Tue Jun 07, 2016):

It's the ghost of Aizat!

**mdk_813**:

Yes, that is the most likely explanation. No, but honestly, it bothers me.

**TheFennekin** (Inhabitant of a Galaxy, Posts: 2001, Joined: Thu Aug 24, 2017):

It bothers me too... Grrrr

**CommanderABab**:

It happens when post reply is used.

**CommanderABab**:

Doesn't happen when quoting.

**CommanderABab**:

Doesn't happen when Full editor & Preview is used.

**Lobby** (Developer, Posts: 3590, Joined: Sun Oct 26, 2008):

Any idea since when this is happening? I can't reproduce it :/

**mdk_813**:

> Lobby wrote (Sat Jan 20, 2018 15:18): Any idea since when this is happening? I can't reproduce it :/

I think it started to happen a couple of days ago. Try posting a reply in a topic that doesn't have a quick reply box, "My last screenshot..." for example. When you hit reply and the new page with a comment box opens, the subject line only shows "Re:". But it is possible that there are other cases too.

**Lobby**:

Bug fixed. Sorry for the inconvenience, I added a new feature to the forum (fill the contents of a post via url) and didn't test it properly.

**CommanderABab**:

Thanks for the fix!

**mdk_813**:

[fixed] Yes, thanks!
http://openstudy.com/updates/56068667e4b032660b20274e
## Loser66 · one year ago

Make it clear, please. I'm confused. How do I turn $$\sum_{i=1}^n a^i$$ into a sum starting at $$i=0$$, i.e. $$\sum_{i=0}^{??}$$? Please help.

1. anonymous: $$\sum_{i=0}^{n} i^{n-1}$$

2. Loser66: I was taught that when we change i = 1 to i = 0, whenever I see i, I subtract 1. Hence the lower limit is i = 0 and the summand is $$a^{n-1}$$. How about the upper limit?

3. Loser66: @AliLnn it cannot be n, as in the original one. :(

4. Loser66: sorry, *$$a^{i-1}$$

5. Loser66: Ex: $$\sum_{i=1}^5 i = 1+2+3+4+5$$. When I turn it to i = 0, the sum must be $$\sum_{i=0}^6 i = 0+1+2+3+4+5$$ to make it the same as the original one, right?

6. Loser66: At that moment, the upper limit is n + 1, not n - 1. That makes me confused.

7. ganeshie8: $$\sum_{i=1}^n a^i$$ Let $$j=i-1$$: $$i=1\implies j=0$$, $$i=n\implies j=?$$

8. Loser66: @ganeshie8 j = n - 1, but how to argue for my example above?

9. Empty: $$\sum_{i=1}^{i=n} a^i$$ You want the lower bound to be 0, so we do algebra on the bottom part, since it is but a simple equation! $$\sum_{i-1=0}^{i=n} a^i$$ Now we see our substitution clearly: $$i-1=j$$. Rearrange this to plug in our new value everywhere: $$i=j+1$$. Goes in: $$\sum_{i-1=0}^{i=n} a^i=\sum_{j=0}^{j+1=n} a^{j+1}=\sum_{j=0}^{j=n-1} a^{j+1}$$ This is what you must do and nothing more; it is just tiny equations on top of and below a funny Greek letter. :P

10. ganeshie8: Your example just happens to be a special case: 0 + anything = anything. That doesn't mean we don't have two terms on the left-hand side.

11. ganeshie8: Maybe try contrasting the two identical sums below: $$\sum\limits_{i=1}^6e^i$$ and $$\sum\limits_{i=0}^5e^{i+1}$$

12. Loser66: Ex2: $$\sum_{i=1}^5 3i^2=3\sum_{i=1}^5 i^2 =3(1^2+2^2+3^2+4^2+5^2)$$ The first element is 3, the last one is 75. If I want to turn it from 0 to ??, the sum must stay as it is, right? $$\sum_{i =0 }^4 3(i^2) = 3(0^2+ 1^2+2^2+3^2 +4^2)$$ Surely, the sum doesn't stay as it is. :(

13. ganeshie8: I see, you must do the substitution inside also: $$\sum_{i=1}^5 3i^2=3\sum_{i=1}^5 i^2 =3(1^2+2^2+3^2+4^2+5^2)$$ Substitute $$j=i-1$$; the sum is the same as $$\sum_{j=1-1}^{5-1} 3(j+1)^2=3\sum_{j=0}^4 (j+1)^2 =3(1^2+2^2+3^2+4^2+5^2)$$

14. ganeshie8: Notice that both series are one and the same and they match term by term.

15. Loser66: Got this part :) Now another example.

16. Loser66: $$\sum_{i =1}^5 3^i$$

17. ganeshie8: $$\sum_{i =1}^5 3^i$$ Substitute $$j=i-1$$; the sum can be rewritten as $$\sum_{j =1-1}^{5-1} 3^{j+1}$$

18. Loser66: Thank you so so much. I got it now. :)
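The index shift discussed in the thread is easy to sanity-check numerically. Here is a small Python sketch (not part of the original thread) comparing $$\sum_{i=1}^{n} a^i$$ with its shifted form $$\sum_{j=0}^{n-1} a^{j+1}$$:

```python
# Check that shifting the summation index leaves the value unchanged:
# sum_{i=1}^{n} a^i  ==  sum_{j=0}^{n-1} a^(j+1)  after substituting j = i - 1.

def sum_original(a, n):
    return sum(a**i for i in range(1, n + 1))    # i runs 1 .. n

def sum_shifted(a, n):
    return sum(a**(j + 1) for j in range(0, n))  # j runs 0 .. n-1

print(sum_original(3, 5))  # 3 + 9 + 27 + 81 + 243 = 363
print(sum_shifted(3, 5))   # 363 as well
```

Both the lower and upper limits shift down by one, while every `i` inside the summand becomes `j + 1`, which is exactly the bookkeeping the thread works through.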
https://www.theoremdex.org/d/1108
# Measurable space

D548: An ordered pair $M = (X, \mathcal{F})$ is a measurable space if and only if

(1) $X$ is a set (D11)
(2) $\mathcal{F}$ is a sigma-algebra on $X$ (D84)

Also known as: measurable subset structure.
https://pv-manufacturing.org/metrology/recombination-processes/
# Recombination processes

Recombination is the opposite process of generation, and involves the annihilation of an electron-hole pair. Recombination is classified as either intrinsic or extrinsic: the intrinsic recombination processes in silicon are radiative and Auger recombination, while extrinsic recombination is recombination via defects, commonly referred to as Shockley-Read-Hall (SRH, also trap-assisted) recombination.

Radiative recombination is the reverse process of electron-hole pair generation via the absorption of a photon. In this process, an electron in the conduction band relaxes to the valence band, recombining with an empty state and emitting all or some of its excess energy as a photon. For indirect bandgap semiconductors, where the minimum energy state in the conduction band and the maximum energy state in the valence band have different momentum k-vectors, this process requires conservation of momentum via phonon emission. Due to the involvement of a phonon, radiative recombination is suppressed in indirect semiconductors such as silicon. For a direct semiconductor, the maximum of the valence band and the minimum of the conduction band are aligned in k-space; consequently, the transition does not require a phonon and its probability is higher.

The net recombination rate for radiative recombination is given by [1]:

$U_{rad}=Bnp$,     (1)

where B is the radiative recombination coefficient for the material, and n and p are the electron and hole concentrations respectively. Values for B have been evaluated experimentally, and in this work a value of B = 4.73×10-15 cm3·s-1 at 300 K is used [2].
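As a rough numerical illustration of eq. (1) and the lifetime expression that follows from it, the sketch below (not from the original text) evaluates the radiative rate and lifetime for an assumed p-type wafer; the doping and intrinsic carrier density are illustrative values, not from the article:

```python
# Hedged sketch of the radiative recombination rate U_rad = B*n*p and the
# corresponding lifetime tau_rad = 1 / (B*(n0 + p0 + dn)). The wafer doping
# and intrinsic carrier density below are assumed, illustrative values.

B = 4.73e-15        # cm^3/s, radiative coefficient at 300 K (from the text)
NI = 9.7e9          # cm^-3, assumed intrinsic carrier density of silicon

p0 = 1e16           # cm^-3, assumed p-type doping
n0 = NI**2 / p0     # cm^-3, equilibrium electron density
dn = 1e14           # cm^-3, excess carrier (injection) density

u_rad = B * (n0 + dn) * (p0 + dn)       # eq. (1): U_rad = B * n * p
tau_rad = 1.0 / (B * (n0 + p0 + dn))    # radiative lifetime, eq. (2)

print(f"tau_rad = {tau_rad:.3e} s")     # on the order of tens of ms
```

The very long lifetime (tens of milliseconds) reflects the point made in the text: radiative recombination is weak in an indirect semiconductor like silicon and rarely limits device performance.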
The corresponding lifetime component for radiative recombination is given by the expression [1]:

$\tau_{rad}=\frac{\Delta n}{U_{rad}}=\frac{1}{B(n_{0}+p_{0}+\Delta n)}$, (2)

At present, radiative recombination is not the dominant loss mechanism in silicon-based photovoltaic devices and is included here for completeness.

#### Auger Recombination

Band-to-band Auger recombination involves an electron in the conduction band transmitting its excess energy to a third charge carrier (either a hole in the valence band or an electron in the conduction band) as it relaxes to the valence band. This process does not involve the emission of a photon, since the energy is transferred to a third charge carrier, which absorbs both the energy and momentum and returns to its original state, for instance by the emission of phonons [3-6]. In Auger recombination the excess energy and momentum can be transferred either to another electron ("eeh" process) or to another hole ("ehh" process) [7]. These transitions are modelled as interactions between non-interacting quasi-free particles, so that the rate of recombination is proportional to the product of the concentrations of the participating particles [3, 4, 7, 8]. The corresponding recombination rates of the "eeh" and "ehh" processes are given by Ueeh = Cnn2p and Uehh = Cpnp2, where Cn and Cp are the Auger coefficients of electrons and holes respectively. Commonly used values of Cn and Cp were found by Dziewior and Schmid to be Cn = 2.8×10-32 cm6s-1 and Cp = 9.9×10-32 cm6s-1 at 300 K [9]. The net recombination rate from Auger processes is the sum of Ueeh and Uehh:

$U_{Aug}=U_{eeh}+U_{ehh}=C_{n}n^{2}p+C_{p}np^2$, (3)

In reality, the particles involved are not non-interacting quasi-free particles; consequently, the Auger recombination rate is enhanced by Coulombic interactions between holes and electrons.
To account for these effects, the Coulomb-enhanced Auger recombination rate enhancement factors geeh and gehh are multiplied by Cn and Cp respectively, giving enhanced Auger coefficients C*n = Cn·geeh and C*p = Cp·gehh [10]. Kerr and Cuevas devised an empirical parameterisation for the intrinsic lifetime at 300 K taking into account Auger processes, radiative recombination according to Schlangenotto et al. [12], the dopant level and the excess carrier density. In that work the Auger recombination rate is given as [13]

$U_{Aug}=np\left(1.8\times{10}^{-24}n_0^{0.65}+6\times{10}^{-25}p_0^{0.65}+3\times{10}^{-27}\mathrm{\Delta}n^{0.8}\right)$ (4)

with the intrinsic lifetime (taking into account radiative and Auger processes) given as

$\tau_{intrinsic,\ Kerr}=\frac{\mathrm{\Delta n}}{np\left(1.8\times{10}^{-24}n_0^{0.65}+6\times{10}^{-25}p_0^{0.65}+3\times{10}^{-27}\mathrm{\Delta}n^{0.8}+9.5\times{10}^{-15}\right)}$. (5)

As surface passivation techniques improved, measurements exceeding the intrinsic limit proposed by Kerr and Cuevas were observed; consequently, Richter et al. further refined the parameterisation based on new measurements. In their work, the intrinsic lifetime is given as

$\tau_{intrinsic,\ Richter}=\frac{\mathrm{\Delta n}}{(np-n_{i,eff}^2)\left(2.5\times{10}^{-31}g_{eeh}n_0+8.5\times{10}^{-32}g_{ehh}p_0+3\times{10}^{-29}\mathrm{\Delta}n^{0.92}+B_{rel}B_{low}\right)}$. (6)

The enhancement factors are given by

$g_{eeh}\left(n_0\right)=1+13\left\{1-\tanh\left[\left(\frac{n_0}{N_{0,eeh}}\right)^{0.66}\right]\right\}$,  (7)

$g_{ehh}\left(p_0\right)=1+7.5\left\{1-\tanh\left[\left(\frac{p_0}{N_{0,ehh}}\right)^{0.63}\right]\right\}$, (8)

where N0,eeh = 3.3×1017 cm-3 and N0,ehh = 7.0×1017 cm-3, Blow is the radiative recombination coefficient at low injection (4.73×10-15 cm3·s-1 at 300 K [2]), and Brel is the relative radiative recombination coefficient according to Ref. [14].
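The Richter parameterisation can be coded up directly. The sketch below is an illustrative implementation, not from the original text: B_rel is taken as 1, the effective intrinsic carrier density is an assumed value, and the coefficients are those published by Richter et al.:

```python
import math

# Illustrative implementation of the Richter et al. intrinsic-lifetime
# parameterisation at 300 K. Assumptions: B_rel = 1, n_i_eff = 9.7e9 cm^-3;
# coefficients as published by Richter et al. (2012).

B_LOW = 4.73e-15      # cm^3/s, low-injection radiative coefficient
N0_EEH = 3.3e17       # cm^-3
N0_EHH = 7.0e17       # cm^-3

def g_eeh(n0):
    return 1 + 13 * (1 - math.tanh((n0 / N0_EEH) ** 0.66))

def g_ehh(p0):
    return 1 + 7.5 * (1 - math.tanh((p0 / N0_EHH) ** 0.63))

def tau_intrinsic(dn, n0, p0, ni_eff=9.7e9, b_rel=1.0):
    """Intrinsic lifetime; dn, n0, p0 in cm^-3, result in seconds."""
    n, p = n0 + dn, p0 + dn
    rate = (n * p - ni_eff**2) * (
        2.5e-31 * g_eeh(n0) * n0
        + 8.5e-32 * g_ehh(p0) * p0
        + 3.0e-29 * dn**0.92
        + b_rel * B_LOW
    )
    return dn / rate

# Example: 1e16 cm^-3 p-type wafer at low and high injection
tau_low = tau_intrinsic(1e14, 9.4e3, 1e16)    # milliseconds-scale
tau_high = tau_intrinsic(1e17, 9.4e3, 1e16)   # Auger-limited, much shorter
```

The example shows the expected qualitative behaviour: at high injection the Δn-dependent Auger term dominates and the intrinsic lifetime drops by orders of magnitude.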
Auger recombination is an intrinsic property of materials like silicon and, together with radiative recombination, determines the upper limit of photovoltaic device performance. Auger recombination is the dominant intrinsic recombination mechanism over a wide range of doping and injection levels, particularly in the high-injection and highly doped cases [15].

#### Recombination via Defects

Defects and impurities in the crystal lattice form discrete energy levels within the forbidden energy gap. These levels allow electrons in the conduction band to be captured by defect states before relaxing into the valence band via recombination with a hole. The statistical model for this process, known as Shockley-Read-Hall (SRH) theory [16, 17], models the transitions of four processes, shown in Figure 1:

• An electron in the conduction band is captured by an empty defect level.
• A hole in the valence band is captured by a filled defect level.
• Electron emission by a filled defect level towards the conduction band.
• Hole emission by an empty defect level towards the valence band.

It is assumed that the states are non-interacting and that the capture and relaxation time of carriers by the defect is non-limiting. For a single state located at some energy Et, the recombination rate USRH is given by

$U_{SRH}=\frac{pn-n_i^2}{\tau_{p0}\left(n+n_1\right)+\tau_{n0}\left(p+p_1\right)}$,                                    (9)

where the numerator represents the deviation from thermal equilibrium, which drives the net recombination rate. The terms n1 and p1 are the electron and hole concentrations when the Fermi level coincides with the defect energy level Et (i.e. EFn = EFp = Et), given by

$n_1\equiv N_c\exp\left(\frac{E_t-E_c}{kT}\right)$,                                              (10a)

$p_1\equiv N_v\exp\left(\frac{E_v-E_t}{kT}\right)$.                                              
(10b)

These expressions are the non-degenerate (Boltzmann) forms; under degenerate conditions the exponential should be replaced by the Fermi-Dirac integral, F1/2. The terms τp0 and τn0 are the capture time constants of holes and electrons respectively, defined as

$\tau_{p0}\equiv{\sigma_pv_{th,\ p}N}_t$ ,                                                     (11a)

$\tau_{n0}\equiv{\sigma_nv_{th,n}N}_t$ ,                                                     (11b)

where vth is the thermal velocity, Nt is the density of defect sites, and σn and σp are the capture cross sections of electrons and holes respectively. The product of the capture cross-section and the thermal velocity represents the probability per unit time that an electron or hole is captured at a defect site. The thermal velocity of electrons and holes and its temperature dependence can be modelled independently according to Green [18]; more commonly, however, this value is taken to be equal for electrons and holes and is approximated as 107 cm/s at 300 K in silicon.

The text in this section has been reproduced with permission from To, A., Improved carrier selectivity of diffused silicon wafer solar cells. 2017, University of New South Wales, Sydney.
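The SRH rate in eq. (9) is straightforward to evaluate numerically. The sketch below (not from the original text) assumes a single mid-gap trap, for which n1 = p1 = ni, with illustrative doping, injection, and capture time constants:

```python
# Hedged sketch of the SRH recombination rate, eq. (9), for a single
# mid-gap defect level (n1 = p1 = ni). All densities in cm^-3, times in s.
# Doping, injection level, and tau_n0/tau_p0 are illustrative assumptions.

NI = 9.7e9  # assumed intrinsic carrier density of silicon at 300 K

def u_srh(n, p, tau_n0, tau_p0, n1=NI, p1=NI):
    """Net SRH recombination rate per eq. (9)."""
    return (p * n - NI**2) / (tau_p0 * (n + n1) + tau_n0 * (p + p1))

# p-type wafer (p0 = 1e16 cm^-3) in low injection (dn = 1e14 cm^-3):
dn, p0 = 1e14, 1e16
n0 = NI**2 / p0
u = u_srh(n0 + dn, p0 + dn, tau_n0=1e-4, tau_p0=1e-4)
tau_srh = dn / u
# In low injection in p-type material the SRH lifetime approaches tau_n0,
# because the minority-carrier (electron) capture term dominates.
```

This reproduces the familiar limiting behaviour of SRH statistics: in low injection the lifetime is set by the capture of minority carriers, here τn0.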
http://www.jstor.org/stable/10.1086/519499
# The Biasing Health Halos of Fast‐Food Restaurant Health Claims: Lower Calorie Estimates and Higher Side‐Dish Consumption Intentions

Pierre Chandon and Brian Wansink

Journal of Consumer Research, Vol. 34, No. 3 (October 2007), pp. 301-314. DOI: 10.1086/519499. Stable URL: http://www.jstor.org/stable/10.1086/519499. Page count: 14.

* John Deighton served as editor and Stephen Hoch served as associate editor for this article.

Why is America a land of low‐calorie food claims yet high‐calorie food intake? Four studies show that people are more likely to underestimate the caloric content of main dishes and to choose higher‐calorie side dishes, drinks, or desserts when fast‐food restaurants claim to be healthy (e.g., Subway) compared to when they do not (e.g., McDonald's). We also find that the effect of these health halos can be eliminated by simply asking people to consider whether the opposite of such health claims may be true. These studies help explain why the success of fast‐food restaurants serving lower‐calorie foods has not led to the expected reduction in total calorie intake and in obesity rates. They also suggest innovative strategies for consumers, marketers, and policy makers searching for ways to fight obesity.

Keywords: Health, Nutrition, Safety; Motivation/Desires/Goals; Assimilation/Contrast; Public Policy Issues; Experimental Design and Analysis (ANOVA)

As the popularity of healthier menus increases, so does the weight of many Americans. Between 1991 and 2001, the proportion of obese U.S. adults has grown from 23% to 31% of the population, a 3% annual compound rate (National Center for Health Statistics 2002). In the same period, the proportion of U.S.
adults consuming low‐calorie food and beverages grew from 48% to 60% of the population (a 2.3% annual compound rate), and the proportion of U.S. consumers trying to eat a healthy diet grew at a 6% annual rate (Barrett 2003; Calorie Control Council National Consumer Surveys 2004; Food Marketing Institute 2005). In the past 5 years, fast‐food restaurants positioned as healthy (e.g., Subway) have grown at a much faster rate than those not making these claims (e.g., McDonald’s). For example, Subway’s television commercial starring Jared Fogle showing that Subway’s turkey sandwich has only 280 calories, half the 560 calories of a Big Mac, was the most recalled television commercial during the 2004 holidays (Advertising Age 2005). This parallel increase in obesity rates and in the popularity of healthier foods with lower calorie and fat density has been noted in consumer research (Seiders and Petty 2004) and in health sciences as “the American obesity paradox” (Heini and Weinsier 1997). The original explanation of the American obesity paradox was that people burn fewer calories than they used to because of technological progress and changing lifestyles (Heini and Weinsier 1997). However, this explanation is now contested. First, the last 4 decades have actually seen an increase in leisure‐time physical activity and a decline in the proportion of sedentary people (Talbot, Fleg, and Metter 2003). Second, Heini and Weinsier relied on self‐reported data, which strongly underestimate increases in actual calorie intake (Chandon and Wansink 2007; Livingstone and Black 2003). In fact, the U.S. Department of Agriculture data on food supply (Putnam, Allshouse, and Kantor 2002) show that calorie supply and calorie intake (computed by subtracting food losses at home and at all levels of the supply chain) have both increased by 18% since 1983 (reaching, respectively, 3,900 and 2,800 calories per person and per day in 2000). 
As a result, most recent reviews of obesity research, from fields as diverse as economics and epidemiology, attribute rising obesity rates to increased calorie intake and not to decreased calorie expenditures (Cutler, Glaeser, and Shapiro 2003; Kopelman 2000). In this article, we propose and test a halo‐based explanation for a specific facet of the American obesity paradox: the simultaneous increase in obesity and in the popularity of restaurants serving lower‐calorie foods and claiming to be healthier. We argue that the health claims made by these restaurants lead consumers to (1) underestimate the number of calories contained in their main dishes and (2) order higher‐calorie side dishes, drinks, or desserts. Taken together, these two effects can lead to more overeating (defined as undetected excessive calorie intake) when ordering from restaurants positioned as healthy than from restaurants not making this claim. Health halos can therefore explain why the increased popularity of healthier fast‐food restaurants has not led to the expected reduction in total calorie intake and in obesity rates. Studying how health claims influence calorie estimations and the choice of side dishes helps bridge the multidisciplinary obesity research efforts in health sciences and consumer research. The Food and Drug Administration has singled out away‐from‐home consumption as a critical contributor to overeating (Food and Drug Administration 2006). Still, biased calorie estimations of restaurant foods are less frequently noted in health sciences than the other factors contributing to overeating, such as the increase in portion size (Ledikwe, Ello‐Martin, and Rolls 2005; Nielsen and Popkin 2003), the higher availability of ready‐made foods (Cutler et al. 2003), or the lower prices of calorie‐rich, nutrient‐poor foods (Hill et al. 2003). Consumer researchers have extensively studied biased nutrition inferences (e.g., Andrews, Netemeyer, and Burton 1998; Moorman et al. 
2004), but they have focused on nutrition evaluation and purchase decisions rather than calorie estimations or consumption decisions. Our health halo results also contribute to the literature on consumer trade‐offs between vice and virtue goals by providing evidence (based on real choices rather than on scenarios) that people balance health and taste goals in single consumption episodes (e.g., Dhar and Wertenbroch 2000; Kivetz and Simonson 2002; Okada 2005; Osselaer et al. 2005). More generally, our findings that healthy eaters underestimate calories more than unhealthy eaters show the limits of a purely motivational perspective, which would instead predict the opposite based on guilt or self‐presentation goals. In this article, we start by reviewing the various inferential and self‐regulatory mechanisms that may explain how health claims influence calorie estimations and a consumer’s choice of complementary food and beverages. In one field study, we show that calorie estimations are significantly lower for Subway meals than for comparable meals eaten at McDonald’s. These results are confirmed in a within‐subjects laboratory study, which also shows that nutrition involvement improves the accuracy of calorie estimations but does not reduce the halo effects of health claims. A third study shows that health claims lead consumers to unknowingly order beverages and side dishes containing more calories. Although it does not elucidate which specific mechanism is responsible for health halos, the fourth study demonstrates how asking a consumer to “consider the opposite” eliminates the biasing effects of health halos on calorie estimation and on side‐dish orders. Finally, we discuss the implications of our findings for research and for reducing the negative effects of health claims in away‐from‐home and in‐home consumption. ## Conceptual Framework ### How Health Claims Influence Calorie Estimations Restaurants are exempted from the U.S. 
1990 Nutrition Labeling and Education Act, which made calorie and other nutrition information mandatory for packaged goods. In the absence of nutrition information, it is very difficult to estimate calorie content through visual inspection or sensory satiation (Chandon and Wansink 2007; Livingstone and Black 2003). Even when consumers know the list of ingredients included in a meal, they have difficulty estimating portion sizes (Nestle 2003). Consumers asked to estimate the number of calories contained in a meal must therefore make inferences based on internal and external cues, such as the health positioning of the restaurant’s brand. The ambiguity of sensory experience also increases the chances that calorie estimations are influenced by the activation of specific consumption goals, by feelings of guilt, or by self‐presentation motives (Wansink and Chandon 2006). #### Inferential Mechanisms. Consumers frequently draw inferences about missing attributes from the brand positioning or from the attributes of comparable products (for a review, see Kardes, Posavac, and Cronley [2004]). For example, Ross and Creyer (1992) found that, if an attribute is missing, consumers rely on the same attribute information from other brands in the same category. This suggests that consumers may make inferences about the number of calories in a particular food from the health positioning of the restaurant brand or from other food items on the restaurant’s menu. Selective accessibility is one of the models that can explain the assimilation of calorie estimations to the health claims of the restaurant. Selective accessibility contends that, unless consumers are specifically asked to consider the opposite, they will spontaneously test whether the target food is similar to the healthy standards or to the specific calorie anchor advertised by the restaurant. 
This increases the accessibility of standard‐consistent information, leading to the assimilation of calorie estimations to the anchor (for a review, see Mussweiler [2003]). Another explanation is provided by a Brunswikian model (e.g., Fiedler 1996), which assumes that consumers normatively aggregate the information provided by the intrinsic and extrinsic cues available. In a noisy environment, extrinsic cues such as quantity anchors can bias estimations even if a consumer is not directly influenced by motivational or memory‐based biases (Chandon and Wansink 2006). Conversational norms can also contribute to the influence of health claims because consumers typically assume that the advertised information is required by law to be truthful and would therefore see no reason not to draw inferences from it (Johar 1995).

#### Self‐Regulatory Mechanisms

Two conflicting goals are salient when making food consumption decisions: the hedonic goal of taste enjoyment and the more utilitarian goal of maintaining good health (Dhar and Simonson 1999; Fishbach, Friedman, and Kruglanski 2003). Many studies have shown that health primes can activate different consumption goals. Priming hedonic goals and concepts, such as sweetness, increases the intensity of desire for hedonic food (such as cookies) and leads consumers to choose this better‐tasting but less healthy option over a less tasty but healthier option (e.g., Ramanathan and Menon 2006; Shiv and Fedorikhin 1999). Health primes can also influence guilt and self‐presentation goals. Okada (2005) found that restaurant diners were more likely to order “Cheesecake deLite,” a relatively healthy dessert, than “Bailey’s Irish Cream Cheesecake,” a relatively unhealthy dessert, when they were presented side by side on the menu but preferred the unhealthy dessert to the healthy one when each was presented alone. She attributes these findings to the fact that joint presentation increases guilt and the difficulty of social justification.
The effects of health primes on goal activation and guilt predict a contrast effect for calorie estimation rather than the assimilation effect predicted by inferential mechanisms. To reduce their feelings of guilt and to justify their activated hedonic goal, consumers should report lower calorie estimations in the unhealthy prime condition than in the healthy prime condition. Supporting this argument, studies in nutrition and epidemiology have found that the individual trait of fear of negative evaluation is correlated with the tendency to underreport calories (Tooze et al. 2004).

#### Hypotheses

Support for the inferential arguments can be found in the many studies showing that consumers generalize health claims inappropriately (Balasubramanian and Cole 2002; Garretson and Burton 2000; Keller et al. 1997; Moorman 1996). For example, Andrews et al. (1998) found that consumers believe that foods low in cholesterol are also low in fat, and consumers eating an energy bar they believed to contain soy rated it higher in nutritional value but lower in taste (Wansink 2003). These halo effects also apply to restaurant menus. Kozup, Creyer, and Burton (2003) found that adding a “heart‐healthy” sign on a menu reduced the perceived risk of heart disease when objective nutritional information was absent, even though it was placed next to an objectively unhealthy menu item (lasagna). In contrast, the few studies attempting to manipulate motivational factors have found little impact on calorie estimations. Muhlheim et al. (1998) directly manipulated guilt and self‐presentation motives through a “bogus pipeline” procedure, which consisted of warning some of the study participants that the accuracy of their calorie estimations would be objectively assessed. They found that the bogus pipeline manipulation only slightly increased self‐reported consumption, from 55% to 61% of actual food intake. McKenzie et al.
(2002) manipulated guilt and self‐presentation motives by using either an obese interviewer or one with a normal weight to conduct in‐person food intake interviews. They found that the body mass of the interviewer had no impact on food intake estimations. Given these results, we expect that calorie estimations are primarily driven by inferential mechanisms and are thus assimilated toward the health claims made by the restaurant.

### How Health Claims Influence Complementary Food Decisions

Complementary food decisions are those pertaining to the choice of side orders, drinks, or desserts ordered following one’s choice of a main course (Dhar and Simonson 1999). Existing research has only examined the effects of health claims on the choice and consumption of the advertised food, and its evidence is mixed. Kozup et al. (2003) found that adding a “heart‐healthy” claim to a menu increased consumers’ intentions to order the food. However, Raghunathan, Naylor, and Hoyer (2006) found that labeling food as “healthy” reduced the likelihood that it would be chosen because of negative taste inferences. Other studies have found that the preference for healthy foods depends on the degree of ego depletion (Baumeister 2002), cognitive load (Shiv and Fedorikhin 1999), guilt and the need for justification (Okada 2005), individual differences in body mass (Wansink and Chandon 2006), comparison frames (Wansink 1994), and the accessibility of chronic hedonic goals (Ramanathan and Menon 2006; Ramanathan and Williams 2007). In contrast, the evidence regarding the effects of health claims on complementary food decisions is more consistent. In a series of vignette studies, Dhar and Simonson (1999) found that consumers predict that people prefer to balance an unhealthy main course with a healthy dessert, or a healthy main course with an unhealthy dessert, rather than choosing a main course and dessert that are both healthy or both unhealthy.
Fishbach and Dhar (2005) found that increasing perceived progress toward the goal of losing weight activates the hedonic taste goals and increases the likelihood that people choose a chocolate bar over an apple. Guilt is one of the explanations why consumers tend to balance health and taste goals within a single consumption episode. Ramanathan and Williams (2007) found that some consumers are able to launder the guilt created by their choice of an indulgent cookie by choosing the utilitarian option in a subsequent choice. We therefore expect that, once the choice of the main course has been made, consumers will choose side orders, desserts, and beverages containing more calories if the main course is positioned as healthy than if it is not.

### Moderating Factors

Clearly, not all consumers base their food consumption decisions on health or nutrition considerations. One might expect that consumers highly involved in nutrition would be more knowledgeable about it and less likely to be influenced by health claims (Wansink 2005). Yet, past research suggests that nutrition involvement may not moderate the effects of health claims. Moorman (1990) found that nutrition involvement increases the self‐assessed ability to process nutrition information but does not improve nutrition comprehension or the nutrition quality of food choices in two product categories. Two studies (Andrews, Burton, and Netemeyer 2000; Andrews et al. 1998) found that objective nutrition knowledge improves the accuracy of some nutrition evaluations but does not significantly reduce erroneous inferences across nutrients or the effectiveness of objective nutrient information in reducing these overgeneralizations. More generally, studies have found that association‐based errors, such as those resulting from priming, cannot be corrected by increasing incentives and the degree of elaboration (Arkes 1991).
In fact, Johar (1995) found that highly involved consumers are more likely to be deceived by implied advertising claims because involvement increases the likelihood of making invalid inferences from incomplete‐comparison claims, such as “this brand’s sound quality is better.” Chapman and Johnson (1999) showed that cognitive elaboration, one of the consequences of involvement, actually enhances anchoring effects because it facilitates the selective retrieval of anchor‐consistent information. For these reasons, we expect that nutrition involvement increases the overall accuracy of calorie estimations but does not moderate the effects of health claims on calorie estimations and on complementary food decisions. How can health halos be reduced? If calorie inferences are partly caused by priming and selective activation, one solution is to encourage consumers to question the validity of the health prime. Drawing attention to the priming source reduces the priming effect even if the activation of information in memory occurred nonconsciously (Strack et al. 1993). The effectiveness of this debiasing strategy is enhanced if people are asked to consider evidence inconsistent with the prime. Mussweiler, Strack, and Pfeiffer (2000), working on the estimation of the value of a used car, showed that instructing people to consider whether a claim opposite to the one primed may be true increases the accessibility of claim‐inconsistent knowledge and therefore reduces selective‐accessibility biases. In summary, we predict that health claims reduce calorie estimations for the main dishes served by fast‐food restaurants and lead consumers to order high‐calorie complementary food or drinks. We also expect that asking consumers to consider whether opposite health claims may be equally valid eliminates the effects of health halos on main‐dish calorie estimation and side‐dish choices. We test these predictions in one field study and in three laboratory experiments.
## Study 1: Calorie Estimations by Subway and McDonald’s Diners

### Method

We asked consumers who had just finished eating at McDonald’s or Subway to estimate the number of calories contained in their meal, and we then compared their estimates to the actual calorie content of the meals. Study 1 was conducted on 9 weekdays in three medium‐sized Midwestern U.S. cities. As they completed their meal, every fourth person was systematically approached and asked if they would answer some brief questions for a survey. No mention was made of food at that point. During this process, the interviewer unobtrusively recorded the type and size of the food and drinks from the wrappings left on the person’s tray. In case of uncertainty (e.g., to determine if the beverage was diet or regular), the interviewer asked for clarification from the respondents. Nutrition information provided by the restaurants was then used to compute the actual number of calories of each person’s meal. Of the 392 people who were approached while they were finishing a Subway meal, 253 (65%) agreed to participate. Of the 379 people who were approached while they were finishing a McDonald’s meal, 265 (70%) agreed to participate. To pretest the health positioning of McDonald’s and Subway, we asked 49 regular customers of both restaurants who were eating at Subway or McDonald’s to indicate their agreement with the sentence “The food served here is healthy” on a nine‐point scale anchored at 1 = strongly disagree and 9 = strongly agree. As expected, Subway meals were rated as significantly more healthy ($M = 6.2$) than McDonald’s meals ($M = 2.4$; $F(1, 49) = 80$, $p < .001$).

### Results

To increase the comparability of McDonald’s and Subway meals, we restricted the analysis to the meals consisting of a sandwich, a soft drink, and a side order. This yielded a total of 320 meals (193 for McDonald’s and 127 for Subway).
To test the hypothesis that calorie estimations are lower for Subway than for McDonald’s meals containing the same number of calories, we estimated the following regression via ordinary least squares:

$$\mathrm{ESTCAL} = \mu + \beta\,\mathrm{HEALTHCLAIM} + \delta\,\mathrm{ACTCAL} + \lambda\,(\mathrm{HEALTHCLAIM} \times \mathrm{ACTCAL}) + \varepsilon,$$

where ESTCAL is the estimated number of calories, HEALTHCLAIM is a binary variable taking the value of 1/2 for Subway meals and −1/2 for McDonald’s meals, ACTCAL is the mean‐centered actual number of calories of the meals, and ε is the error term. We included ACTCAL as a covariate because consumers tend to underestimate the calories of large meals (Chandon and Wansink 2007) and because McDonald’s meals tend to be bigger than Subway meals. As expected, the coefficient for HEALTHCLAIM was negative and statistically significant ($\beta = -151$, $t = -3.6$, $p < .001$). These participants believed that the meals from Subway contained an average of 151 fewer calories than a same‐calorie meal at McDonald’s. The regression parameters enable us to predict that, for a meal containing 1,000 calories, the mean calorie estimation will be 744 calories for someone eating at McDonald’s and only 585 calories (21.3% lower) for someone eating at Subway. The coefficients for ACTCAL and for the interaction (respectively, $\delta = .29$, $t = 4.7$, $p < .001$ and $\lambda = -.12$, $t = -.9$, $p = .34$) indicated that consumers tended to underestimate calories more for large meals than for small meals but that the effect of meal size is similar for Subway and McDonald’s meals. The same results were obtained when using the percentage deviation ([estimated − actual]/actual) as the dependent variable ($\beta = -19.2$, $t = -3.9$, $p < .001$; $\delta = -.06$, $t = -7.8$, $p < .001$; and $\lambda = -.03$, $t = -1.8$, $p = .07$), indicating that the mean percentage deviation is more negative (more biased) for Subway meals than for McDonald’s meals containing the same number of calories.
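As a sketch of this analysis (not the authors’ code), the ±1/2 effect coding and the OLS fit can be reproduced on synthetic data. The meals and the intercept of 700 below are invented for illustration; the slope parameters are set to the reported estimates, so the fit simply recovers them and shows why the HEALTHCLAIM coefficient reads directly as the Subway‐minus‐McDonald’s gap at equal actual calories:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    Xty = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    return solve(XtX, Xty)

# Synthetic meals: (is_subway, actual_calories) -- invented, not the study data.
meals = [(1, 600), (1, 800), (1, 1000), (1, 900),
         (0, 700), (0, 900), (0, 1100), (0, 1000)]
mean_act = sum(a for _, a in meals) / len(meals)

# Generate noiseless estimations from the reported slopes (intercept 700 is assumed).
def est_cal(subway, act):
    h = 0.5 if subway else -0.5  # HEALTHCLAIM coding: +1/2 Subway, -1/2 McDonald's
    ac = act - mean_act          # mean-centered ACTCAL
    return 700 - 151 * h + 0.29 * ac - 0.12 * h * ac

X, y = [], []
for subway, act in meals:
    h = 0.5 if subway else -0.5
    ac = act - mean_act
    X.append([1.0, h, ac, h * ac])
    y.append(est_cal(subway, act))

mu, beta, delta, lam = ols(X, y)
# With this coding, beta is the Subway-minus-McDonald's gap at equal actual calories.
print(round(beta))  # -151
# Predicted gap at 1,000 actual calories (depends on the mean used for centering):
gap_1000 = beta + lam * (1000 - mean_act)
```

Because the +1/2 / −1/2 coding is symmetric, the intercept stays at the grand mean and the main effect is interpretable on its own, which is presumably why the authors centered ACTCAL as well.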
To illustrate the effects of health claims on calorie estimations for comparable meals, we computed the mean calorie estimate for small, medium, and large meals (categorized on the basis of the actual number of calories). As shown in figure 1, mean calorie estimates were lower for Subway meals than for comparable McDonald’s meals in each size tier (for small meals, 473 vs. 563 calories, $F(1, 106) = 4.0$, $p < .05$; for medium meals, 559 vs. 764 calories, $F(1, 105) = 9.1$, $p < .01$; and for large meals, 646 vs. 843 calories, $F(1, 103) = 4.1$, $p < .05$).

Figure 1. Study 1: Calorie Estimations of Subway and McDonald’s Diners

### Discussion

Study 1 examines the general health halo that leads people to believe that a 1,000‐calorie Subway meal contains 21.3% fewer calories than a same‐calorie McDonald’s meal. It also shows that calorie estimations are not primarily driven by guilt or by self‐presentation goals, as this would have predicted lower calorie estimations by McDonald’s customers than by Subway customers. These results nonetheless raise two important questions that need to be addressed in subsequent studies. First, the results of study 1 might be caused by intrinsic differences between self‐selected Subway and McDonald’s diners.1 A second issue is that participants in study 1 evaluated only one McDonald’s or Subway meal. Their estimations might have been better calibrated if they had been asked to make multiple estimates or asked to compare meals instead of estimating a single meal. This is because consumers pay more attention to hard‐to‐evaluate attributes (such as calories) in joint evaluations than in separate evaluations (Hsee 1996). We address these issues in study 2 by using a within‐subjects design in which respondents estimate the calories contained in small and large Subway and McDonald’s sandwiches containing the same number of calories.
Study 2 also enables us to examine whether nutrition involvement can mitigate the biasing effects of health claims on calorie estimations.

1. To explore this issue, we recontacted 58 participants who provided their telephone numbers and asked them to report their height and weight, which we used to compute their body mass index (BMI). Although we found no difference in body mass ($M = 23.4$ kg/m² for McDonald’s customers vs. $M = 23.6$ kg/m² for Subway customers, $F(1, 56) = .1$, $p = .76$), we cannot rule out that the groups may be different on other dimensions, such as involvement in nutrition.

## Study 2: Can Nutrition Involvement Mitigate the Halo Effects of Health Claims on Calorie Estimations?

### Method

Study 2 used a 2 (health claims: Subway vs. McDonald’s) × 2 (actual number of calories: 330 vs. 600) within‐subjects design. It was conducted among University of Illinois students and staff members, who were given the opportunity to win a series of raffle prizes in exchange for their participation. We asked 316 of these consumers who had eaten at least three times at Subway and McDonald’s in the previous year to estimate the number of calories contained in two Subway sandwiches (a 6‐inch ham and cheese sandwich containing 330 calories and a 12‐inch turkey sandwich containing 600 calories) and in two McDonald’s burgers (a cheeseburger containing 330 calories and a Big Mac containing 600 calories). The ordering of the restaurants was counterbalanced across participants. Unlike in study 1, in which participants had ordered and consumed the food, participants in study 2 knew that they would not consume the food.
To measure their nutrition involvement, we used a five‐item scale and asked respondents to indicate their agreement with these statements: “I pay close attention to nutrition information,” “It is important to me that nutrition information is available,” “I ignore nutrition information” (reverse coded), “I actively seek out nutrition information,” and “Calorie levels influence what I eat” on a nine‐point scale anchored at 1 = strongly disagree and 9 = strongly agree. The mean, median, and standard deviation of the scale were, respectively, 4.6, 4.5, and 2.1. After verifying the reliability ($\alpha = .85$) and unidimensionality of the scale (62% of the variance was extracted by the first principal component), we averaged the responses to the five items and categorized respondents into a low or high nutrition involvement group via a median split.

### Results

We analyzed the data using a repeated‐measures ANOVA with two within‐subjects factors and one between‐subjects factor. The two within‐subjects factors were HEALTHCLAIM (which indicates whether the food was from Subway or McDonald’s) and ACTCAL (which measured the actual number of calories of the food—330 or 600 calories). The between‐subjects factor was NUTINV, which indicates whether respondents belonged to the high or low nutrition involvement group (similar results were obtained when using the continuous scale). We included all two‐way and three‐way interactions. Because the order of estimations had no effect on calorie estimations and did not interact with any of the other factors, we excluded this factor from the analysis reported here. The main effects of HEALTHCLAIM and ACTCAL and their interaction were all statistically significant (respectively, $F(1, 314) = 158$, $p < .001$; $F(1, 314) = 468$, $p < .001$; and $F(1, 314) = 72.5$, $p < .001$).
As shown in figure 2, calorie estimations were lower for Subway sandwiches than for McDonald’s sandwiches that contained the same number of calories. Furthermore, the halo effects of health claims were stronger for the sandwiches containing 600 calories ($M = -200$ calories, a 33% underestimation) than for the smaller sandwiches containing 330 calories ($M = -80$ calories, a 24% underestimation). In addition, the main effect of nutrition involvement and its interaction with ACTCAL were both statistically significant (respectively, $F(1, 314) = 9.8$, $p < .01$ and $F(1, 314) = 6.1$, $p < .05$), indicating that respondents highly involved in nutrition had higher (more accurate) calorie estimations, especially for the larger sandwiches. As also expected, the interaction between NUTINV and HEALTHCLAIM and the three‐way interaction were not statistically significant (respectively, $F(1, 314) = .9$, $p = .34$ and $F(1, 314) = .4$, $p = .55$). This indicates that nutrition involvement did not reduce the biasing effects of the restaurant brands’ health positioning on consumers’ calorie estimations.

Figure 2. Study 2: How Nutrition Involvement Influences Calorie Estimations for Subway and McDonald’s Sandwiches

### Discussion

Study 2 shows that even consumers familiar with both restaurants estimate that Subway sandwiches contain significantly fewer calories than McDonald’s sandwiches containing the same number of calories. Study 2 therefore replicates the findings from study 1 in a repeated‐measures context. The within‐subjects design of study 2 allows us to rule out the alternative explanation that the results of study 1 were caused by self‐selection or by unobserved differences in the type of meals consumed in the two restaurants. Study 2 also shows that, although nutrition involvement improves the quality of calorie estimations, it does not reduce the halo effects of the restaurant brand’s health positioning.
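As an aside, the reliability check and median split used to build the nutrition involvement measure in the study 2 Method can be sketched in a few lines. The responses below are invented, and “I ignore nutrition information” is assumed to already be reverse coded before entering the scale:

```python
from statistics import pvariance, median

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    items: one list of responses per item, respondents in the same order."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(it) for it in items) / pvariance(totals))

# Hypothetical responses of 6 people to the 5 items (1-9 scale, reverse coding done).
items = [
    [7, 3, 8, 2, 6, 4],
    [6, 4, 9, 3, 5, 4],
    [7, 2, 8, 2, 6, 5],
    [5, 3, 9, 1, 7, 4],
    [6, 3, 7, 2, 6, 3],
]
alpha = cronbach_alpha(items)

# Median split on the mean of the five items, as in the study.
means = [sum(resp) / len(items) for resp in zip(*items)]
cut = median(means)
groups = ["high" if m > cut else "low" for m in means]
print(round(alpha, 2), groups)
```

The variance-type choice (population vs. sample) cancels in the ratio, so `pvariance` is used throughout for consistency.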
Taken together, studies 1 and 2 provide converging evidence that Subway and McDonald’s health claims bias consumers’ calorie estimations. In study 3, we examine the effects of these claims on consumers’ complementary food decisions. This also allows us to test the alternative explanation that the results of studies 1 and 2 are caused by simple response scaling biases, that is, that the health positioning of Subway and McDonald’s influenced only consumers’ calorie ratings, not their general estimation of the healthiness of the food. This would predict that health claims would have no impact on the decision to choose low‐ or high‐calorie side orders and drinks. Finally, by collecting calorie estimation data after the consumption decision task, study 3 tests whether health claims influence side‐dish purchase intentions even when people are not explicitly asked to estimate the caloric content of their main dishes.

## Study 3: Can Health Claims Lead Consumers to Unknowingly Choose Higher‐Calorie Side Orders and Drinks?

Forty‐six undergraduate students were recruited on the campus of Northwestern University and were paid $2 to participate in this and another unrelated study. Half were given a coupon for a McDonald’s Big Mac sandwich, and the other half were given a coupon for a Subway 12‐inch Italian BMT sandwich. To provide a more conservative test of the effects of health claims on consumption decisions, the “healthy” food used in study 3 actually has 50% more calories than the “unhealthy” food (a 12‐inch Subway Italian BMT sandwich has 900 calories, and a Big Mac has 600 calories). We then gave the participants a menu and asked them to indicate what they would like to order with their sandwich, if anything.
The menu included a small, medium, or large regular fountain drink (containing 155, 205, and 310 calories, respectively); a small, medium, or large diet fountain drink containing no calories; and one or two chocolate chip cookies (containing 220 calories per cookie). These items were chosen because they are the only side orders common to both McDonald’s and Subway. We then asked participants to estimate the number of calories contained in their sandwich, beverage, and cookies. Finally, we measured how important eating healthily is to them by asking them to indicate their agreement with three sentences (“Eating healthily is important to me,” “I watch how much I eat,” and “I pay attention to calorie information”) on a nine‐point scale anchored at 1 = strongly disagree and 9 = strongly agree.

### Results

We first examine the total number of calories contained in the beverages and cookies that were ordered in the Subway and McDonald’s coupon conditions. Compared to those who had received a Big Mac coupon, participants who received the Subway coupon were less likely to order a diet soda, more likely to upgrade to a larger drink, and more likely to order cookies. As a result, participants receiving a Subway coupon ordered side dishes and beverages containing more calories ($M = 111$ calories) than participants receiving a McDonald’s coupon ($M = 48$ calories; $F(1, 44) = 4.0$, $p < .05$; see fig. 3). Because the Subway sandwich also contained more calories than the McDonald’s sandwich, participants ended up with a meal containing 56% more calories ($M = 1,011$ calories) in the Subway coupon condition than in the McDonald’s coupon condition ($M = 648$ calories; $F(1, 44) = 132.9$, $p < .001$).
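As a quick arithmetic check, the meal totals above follow directly from the sandwich calories and the mean side‐order calories reported in the text:

```python
# Actual calorie counts reported in study 3.
SUBWAY_SANDWICH = 900  # 12-inch Italian BMT
BIG_MAC = 600
side_orders = {"subway_coupon": 111, "mcdonalds_coupon": 48}  # mean side-order calories

meal_subway = SUBWAY_SANDWICH + side_orders["subway_coupon"]
meal_mcdonalds = BIG_MAC + side_orders["mcdonalds_coupon"]
pct_more = round((meal_subway / meal_mcdonalds - 1) * 100)

print(meal_subway, meal_mcdonalds, pct_more)  # 1011 648 56
```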
Figure 3. Study 3: How Subway and McDonald’s Coupons Influence the Estimated and Actual Number of Calories (for the Main Sandwich, Side Orders, and the Whole Meal)

We now examine whether participants receiving the Subway coupon realized they were ordering calorie‐rich side orders and whether they ended up with a much larger combined meal than those receiving the McDonald’s coupon. As shown in figure 3, calorie estimations for the side orders were similar for participants with the Subway coupon ($M = 48$ calories) and for participants with the Big Mac coupon ($M = 43$ calories; $F(1, 44) < .1$, $p = .43$). Similarly, calorie estimations for the main sandwich were similar in both conditions ($M = 439$ calories for the 12‐inch Subway sandwich vs. $M = 557$ calories for the Big Mac; $F(1, 44) = 2.4$, $p = .13$). As a result, calorie estimations for the total meal were similar in the healthy prime condition ($M = 487$ calories) and in the unhealthy prime condition ($M = 600$ calories; $F(1, 44) = 1.9$, $p = .17$). Because the actual number of calories of the meal was significantly higher in the Subway (healthy prime) condition than in the McDonald’s (unhealthy prime) condition, the calorie underestimation was significantly larger in the healthy prime condition ($M_{\mathrm{est.}-\mathrm{act.\,cal.}} = -524$ calories, a 52% underestimation) than in the unhealthy prime condition ($M_{\mathrm{est.}-\mathrm{act.\,cal.}} = -48$ calories, a 7% underestimation; $F(1, 44) = 29.9$, $p < .001$). These results indicate that the actual increase in calories between the Subway and McDonald’s coupon conditions was not captured by consumers’ calorie estimations. We also examined the relationship between main‐dish calorie estimations and side‐dish purchase intentions.
As expected, the correlation between the calorie estimation bias (measured as the difference between the actual and estimated number of calories in the sandwich) and the actual number of calories of the side dishes is negative and statistically significant ($r = -.36$, $p < .01$). This raises the question of whether the effects of health claims on complementary food decisions are mediated by biases in the estimation of the number of calories of the main sandwich. When entered alone in a regression of the actual number of calories contained in side dishes, the parameter of the binary variable capturing the coupon manipulation was statistically significant ($B = 63.3$, $t = 2.0$, $p < .05$). However, this parameter becomes insignificant when the calorie estimation bias is entered in the regression as a covariate ($B = 23.7$, $t = .6$, $p = .56$). A Sobel test shows that the mediation effect is statistically significant ($z = 2.32$, $p < .05$). Of course, this analysis cannot rule out the opposite causality link, that is, that participants adjusted their main‐dish calorie estimations to justify their side‐dish orders. In contrast, the analysis of the healthy eating data shows that the health claim manipulation did not activate the goal of eating healthily. Respondents were as likely to agree with the three sentences (“Eating healthily is important to me,” “I watch how much I eat,” and “I pay attention to calorie information”) in both conditions (respectively, $F(1, 44) = .4$, $p = .53$; $F(1, 44) < .1$, $p = .94$; and $F(1, 44) < .1$, $p = .86$). This shows that the effects of health claims on complementary food decisions are not mediated by the activation of healthy eating goals.

### Discussion

Although the “healthy” Subway sandwich contained 50% more calories than the “unhealthy” Big Mac, consumers ordered higher‐calorie drinks and cookies when they received a coupon for the Subway sandwich than when they received a coupon for the Big Mac.
Yet, the estimated caloric content of the side dishes was similar in both conditions (48 vs. 43 calories), leading to a 52% underestimation of the total number of calories contained in the “healthy” meal compared to an insignificant 7% underestimation for the “unhealthy” meal. Study 3 further contributes to studies 1 and 2 by showing that health claims influence side‐dish decisions and not just calorie estimations. This rules out the competing explanation that health halos influence calorie estimations only because of simple response biases. Another contribution of study 3 is that consumption effects were found even when consumers were not explicitly asked to estimate calories. This supports the finding of study 2 that health halo effects are robust, regardless of a consumer’s nutrition involvement. Third, study 3 shows that the impact that health claims have on side‐dish orders is not mediated by the activation of healthy eating goals. Instead, this suggests that it is mediated by the calorie estimations for the main dish. In study 4, we examine whether instructions to “consider the opposite” can reduce the effects of health halos on calorie estimations and on side‐dish choices. Study 4 also addresses some of the remaining issues raised by the results of studies 1–3. First, we manipulate health claims by changing the name of the restaurant and the menu while keeping the target food constant. Second, we test whether the results of studies 1–3 regarding estimations are driven by a lack of familiarity with calories by asking respondents to estimate the amount of meat contained in the sandwiches in ounces, a more familiar unit. Finally, we examine whether the parallel findings of study 3 for calorie estimations and side‐dish decisions hold in a between‐subjects design in which some participants are asked to choose complementary food while the others are asked to estimate the number of calories of the main dish of the meal. 
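For reference, the Sobel test used in study 3 divides the indirect effect $a \times b$ by its approximate standard error. A minimal sketch follows; the path coefficients and standard errors below are illustrative placeholders, not the study’s estimates:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for an indirect (mediated) effect a*b."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# Hypothetical path estimates (NOT the paper's coefficients):
# a: coupon (Subway vs. McDonald's) -> calorie-estimation bias for the sandwich
# b: estimation bias -> actual side-dish calories, controlling for the coupon
z = sobel_z(a=-476.0, se_a=120.0, b=-0.13, se_b=0.05)
significant = abs(z) > 1.96  # two-tailed test at alpha = .05
print(round(z, 2), significant)
```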
## Study 4: Correcting the Effects of Health Claims on Main‐Dish Calorie Estimations and on Side‐Dish Choices

### Method

Study 4 used a 2 (claims: healthy vs. unhealthy) × 2 (debiasing instructions: none or consider the opposite) × 2 (decision task: calorie estimation for the main dish or choice of side dish) between‐subjects design. We recruited 214 University of Illinois students in exchange for class credit and gave them a typical fast‐food menu, including the target sandwich and eight other food choices. The menu provided a short description of the food, prices, and calorie content (except for the target food). The target food was described as “our famous classic Italian sandwich, with Genoa salami, pepperoni, and bologna.” In the healthy prime condition, the name of the restaurant was “Good Karma Healthy Foods,” and the menu included healthy choices such as cream of carrot soup (90 calories) or an organic hummus platter (280 calories). In the unhealthy prime condition, the name of the restaurant was “Jim’s Hearty Sandwiches,” and the menu included high‐calorie foods such as “beef on a Kimmelweck roll” (800 calories) or a “sausage sandwich” (760 calories). In the questionnaire, we indicated that we were interested in food preferences, and we emphasized that there were no right or wrong answers. To ensure that participants studied the menu, we first asked them to rate the average price of the restaurant’s food. The participants then went to a location in the room where a 6‐inch Italian bologna sandwich was on a plate along with a 20‐ounce glass of Coca‐Cola Classic (clearly labeled). This meal contained 660 calories and was presented as having been ordered from the “Good Karma Healthy Foods” restaurant or from the “Jim’s Hearty Sandwiches” restaurant. Participants in the consider‐the‐opposite condition were then asked to “write down three reasons why the sandwich is not typical of the restaurant that offers it.
That is, write down three reasons why this is a generic meal that could be on any restaurant menu.” Participants in the control condition received no further instructions. Participants in the estimation condition were then asked to write down the calories contained in this meal (the sandwich and the beverage) and the amount of meat in the sandwich (in ounces). Participants in the consumption condition were not asked to make any estimation but were asked instead to indicate their intention to order potato chips with this meal on a nine‐point scale anchored at 1 = I wouldn’t want any chips and 9 = I would want some chips. Because we were particularly interested in their consumption intentions, we assigned twice as many people to this condition as to the calorie estimation condition. On the last page of the questionnaire, we asked all the participants to rate how important healthy eating is to them by indicating their agreement with four sentences. Four of the participants guessed the general purpose of the study, and their answers were not included in the analyses reported here.

### Results

To examine the effects of health claims and of the consider‐the‐opposite instructions, we conducted a series of ANOVAs with two independent variables: HEALTHCLAIM, a variable measuring whether participants received the healthy or unhealthy menu, and DEBIAS, a variable measuring whether participants were in the control or the consider‐the‐opposite condition. Looking at calorie estimations first, we found that the main effects of HEALTHCLAIM and of DEBIAS were not statistically significant (respectively, $F(1, 65) = 2.0$, $p = .16$ and $F(1, 65) = .1$, $p = .81$). However, the expected interaction between HEALTHCLAIM and DEBIAS was statistically significant ($F(1, 65) = 5.2$, $p < .05$).
In the control condition, calorie estimations were significantly lower with the healthy menu ($M = 409$ calories, a 38% underestimation) than with the unhealthy menu ($M = 622$ calories, a 6% underestimation; $F(1, 28) = 7.5$, $p < .01$). In the consider‐the‐opposite condition, calorie estimations were essentially the same for the healthy menu ($M = 526$ calories, a 20% underestimation) as for the unhealthy menu ($M = 477$ calories, a 28% underestimation; $F(1, 37) = .4$, $p = .55$; see fig. 4a).

Figure 4. Study 4: How Health Claims and Debiasing Instructions Influence Calorie Estimations (A) and Side‐Order Consumption Intentions (B)

To test whether the effects of health claims persist for familiar units, we conducted the same ANOVA with respondents’ estimates of the amount of meat in the sandwich as the dependent variable. As for calorie estimations, the main effects of HEALTHCLAIM and DEBIAS were not significant (respectively, $F(1, 65) = 1.6$, $p = .21$, and $F(1, 65) = .6$, $p = .42$), but their interaction was statistically significant ($F(1, 65) = 6.9$, $p < .05$). In the control condition, the estimated amount of meat was lower with the healthy menu ($M = 3.4$ ounces) than with the unhealthy menu ($M = 5.5$ ounces; $F(1, 28) = 4.9$, $p < .05$). In the consider‐the‐opposite condition, estimated weights were the same in both conditions ($M = 5.2$ ounces with the healthy menu and $M = 4.8$ ounces with the unhealthy menu; $F(1, 37) = .3$, $p = .60$). Using the same ANOVA model, we analyzed the effects of health claims on consumption intentions (measured on the 1–9 scale) and found the same effects but in the expected opposite direction (see fig. 4b). The main effects of HEALTHCLAIM and DEBIAS were not statistically significant (respectively, $F(1, 141) = .3$, $p = .59$, and $F(1, 141) = 1.9$, $p = .18$), but their interaction was statistically significant ($F(1, 141) = 4.2$, $p < .05$).
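The crossover behind these interaction tests can be seen directly in the reported cell means for calorie estimates. The following sketch is descriptive arithmetic only, not a reanalysis (cell sizes and error terms are not reported here): it computes the health‐halo gap in each instruction condition and their difference, which is what the 2 × 2 interaction contrast captures.

```python
# Reported cell means (in calories) for estimates of the same 660-calorie meal.
# Keys: (menu claim, debiasing instruction).
means = {
    ("healthy", "control"): 409,
    ("unhealthy", "control"): 622,
    ("healthy", "consider_opposite"): 526,
    ("unhealthy", "consider_opposite"): 477,
}

# Health-halo gap (unhealthy minus healthy) within each instruction condition.
gap_control = means[("unhealthy", "control")] - means[("healthy", "control")]
gap_debiased = (means[("unhealthy", "consider_opposite")]
                - means[("healthy", "consider_opposite")])

# The 2 x 2 interaction contrast is the difference between the two gaps.
interaction = gap_control - gap_debiased

print(gap_control)   # 213: large halo-driven gap under control instructions
print(gap_debiased)  # -49: the gap slightly reverses after debiasing
print(interaction)   # 262
```

The 213‐calorie gap under control instructions shrinks and slightly reverses (−49 calories) after the consider‐the‐opposite prompt, which is the pattern the significant interaction reflects.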
In the control condition, intentions to consume chips were higher in the healthy menu condition ($M = 7.2$) than in the unhealthy menu condition ($M = 6.0$), although the difference was only marginally significant ($F(1, 54) = 3.6$, $p < .06$). In the consider‐the‐opposite condition, however, consumption intentions did not differ between the healthy ($M = 5.6$) and unhealthy ($M = 6.3$) conditions ($F(1, 83) = 1.3$, $p = .26$). In a final analysis, we examined whether these results were mediated by the activation of the goal of eating healthily. The ratings of respondents in the healthy and unhealthy menu conditions did not differ statistically on any of the four sentences measuring healthy eating goals ($F(1, 206) = .6$, $p = .42$, for “I watch how much I eat”; $F(1, 206) = 2.0$, $p = .16$, for “Eating healthily is important to me”; $F(1, 206) = .5$, $p = .49$, for “I pay attention to calorie information”; and $F(1, 206) = .4$, $p = .50$, for “Looking thin is very important to me”). These results show that the effects of health claims on calorie estimation and complementary food decisions are not mediated by the activation of healthy eating goals.

### Discussion

The most important contribution of study 4 is that the health halo effects on main‐dish calorie estimation and side‐dish choices disappear when consumers consider arguments contradicting the health claims. In fact, the effects of health claims are slightly reversed when participants consider opposite arguments. Although this reversal is not statistically significant, its consistency across all dependent variables suggests that some overcorrection might be taking place. Study 4 also shows that manipulating the name of the restaurant and the type of food on the menu, while keeping the target meal constant, suffices to influence consumers’ choice of side orders and their estimation of the number of calories contained in a familiar meal consisting of an Italian cold‐cut sandwich and a cola.
These results show that the health halo effects found in studies 1–3 were not specific to the manipulation used (the Subway and McDonald’s brands) and can be created relatively easily from a restaurant name and the choice of other items on the menu. The findings of study 4 also rule out the alternative explanations that the results of studies 1–3 were driven by differences in food type between the healthy and unhealthy conditions or by the choice of an unfamiliar unit of measurement (calories). Study 4 also supports the finding of study 3 that health claims influence complementary consumption decisions even when people are not explicitly asked to estimate calories. Finally, study 4 provides more evidence on the interrelatedness of main‐dish calorie estimation and side‐dish choices by showing that they respond similarly, but in opposite directions, to health halo and consider‐the‐opposite manipulations. Next, we discuss the factors that may underlie these effects and their implications for the obesity debate.

## General Discussion

The goal of our research is to help explain a particular facet of the American obesity paradox: the simultaneous increase in obesity and in the popularity of healthier fast‐food restaurants serving lower‐calorie foods. The results of four studies show that consumers estimate that familiar sandwiches and burgers contain up to 35% fewer calories when they come from restaurants claiming to be healthy, such as Subway, than when they come from restaurants not making this claim, such as McDonald’s. These findings hold for estimates of single sandwiches as well as entire meals, before and after intake, and for familiar and unknown restaurant brands. Remarkably, the biasing effects of health claims on calorie estimations are as strong for consumers highly involved in nutrition as for consumers with little interest in nutrition or healthy eating.
These results also hold when calorie estimates are collected in the field, as people are finishing their own meals, a context that should tempt consumers to minimize their estimates in order to reduce their guilt or to look good in the eyes of the interviewers. Two studies further show that health claims lead people to unknowingly choose side dishes containing more calories and therefore raise the chances of overeating through undetected increases in calorie intake. We find that consumers chose beverages, side dishes, and desserts containing up to 131% more calories when the main course was positioned as “healthy” than when it was not, even though the “healthy” main course already contained 50% more calories than the “unhealthy” one. As a result, meals ordered from “healthy” restaurants can unknowingly contain more calories than meals ordered from “unhealthy” restaurants. These health claims influence the choice of side dishes even when consumers are not explicitly asked to estimate calories. Fortunately, we find that these biasing influences of health claims can be eliminated by prompting consumers to consider whether the opposite health claims might be true.

### Implications for Researchers

These findings have implications for the literature on consumer self‐regulation and particularly for studies of the effects of goals on behavioral performance. Polivy and Herman (1985) coined the term “what‐the‐hell effect” to describe the behavior of restrained eaters who overindulge once they exceed their daily calorie goal because they consider the day lost. The what‐the‐hell effect has been shown to occur for negatively framed goals, such as a daily calorie limit (Cochran and Tesser 1996), but not when the goal is framed as a gain or when the goal is distant (such as a weekly calorie goal). Further research could test whether the what‐the‐hell effect moderates the effects of health claims on consumption.
Because unhealthy meals are perceived to contain more calories than healthy meals, restrained eaters are more likely to think that they have exceeded their calorie goal when the food or restaurant is seen as “unhealthy” than when it is not. Restrained eaters are thus more likely to experience a “virtual what‐the‐hell” effect and to order more food in unhealthy restaurants, just the opposite of how halo effects influence consumers. The net effect on calorie intake would then depend on the proportion of restrained eaters with violated calorie goals in each type of restaurant. The success of the consider‐the‐opposite debiasing strategy suggests that selective activation may underlie the effects of health claims on calorie estimations and consumption decisions. Our results also suggest that the influence of health halos on the choice of a side dish may be mediated by main‐dish calorie estimates and not by feelings of guilt or by the activation of healthy eating goals. Further research is needed to replicate these findings and to rule out other potential explanations, such as simple priming effects caused by spreading activation, normative updating, or conversational norms. For example, the menus used in study 4 could be modified to include both healthy and unhealthy items. A selective accessibility account would predict that consumers will retrieve more healthy items from a restaurant with a healthy name (and more unhealthy items from a restaurant with an unhealthy name) and that the effect of the restaurant name on calorie estimates will be mediated by the frequency of the items retrieved. Incorporating a control (no prime) condition would also help to determine whether people assimilate their calorie estimates toward the healthy restaurant only, toward the unhealthy restaurant only, or toward both.² More generally, more research is needed to examine whether health claims have the same effects on prudent and impulsive consumers.
Whereas most studies have found that food temptations prime hedonic goals, Fishbach et al. (2003) found that they activate the overriding dieting goal among prudent consumers. Prudent and impulsive consumers also differ in how they respond to hedonic primes over time. Ramanathan and Menon (2006) found that hedonic primes increase preferences for unhealthy foods in both groups but that the preference for hedonic food persists only for impulsive consumers. Ramanathan and Williams (2007) further showed that balancing hedonic and utilitarian goals is more common among prudent consumers than among impulsive consumers. Finally, it would be interesting to examine whether health halos influence not just single‐occasion consumption intentions but also, as with product stockpiling, the frequency of consumption (Chandon and Wansink 2002).

2. We thank the reviewers for these suggestions.

### Implications for Managers, Policy Makers, and Consumers

One focus of health professionals, public policy makers, and responsible marketers is to reduce overeating by proposing healthier meals. This is obviously commendable, and we must emphasize that our results by no means imply that people should avoid restaurants that, like Subway, offer healthier meals than their competitors. As shown in study 1, meals ordered at Subway contain, on average, fewer calories ($M = 694$ calories) than meals ordered at McDonald’s ($M = 1,081$ calories; $F(1, 318) = 134$, $p < .001$). Still, our findings show that the public health benefits of healthier foods are at least partially negated by the halo effects of health claims, which lead people to order calorie‐rich side dishes and beverages. More generally, some strategies for promoting healthy eating amount to finger‐pointing at food indulgences. This can be counterproductive because temptations abound and willpower is notoriously fallible. The risk is that this accusatory approach may demotivate consumers and create a backlash.
Our findings suggest that another worthy public policy effort may be to help people better estimate the number of calories they consume. There is nothing wrong with occasionally enjoying a high‐calorie meal as long as people recognize that they have had a lot of calories and need to adjust their future calorie intake or expenditure accordingly. In fact, countries with a more relaxed and hedonic attitude toward food, like France or Belgium, tend to have less serious obesity problems than the United States (Rozin et al. 1999). Reducing biases in calorie estimation is important because even small calorie underestimations can lead to substantial weight gain over the course of a year (Wansink 2006). For example, study 1 found that the mean estimate of a 1,000‐calorie meal was 159 calories lower if the meal was bought at Subway than if it was bought at McDonald’s. This difference can lead to substantial weight gain if people eating at Subway think that they have earned a 159‐calorie credit that they can use toward eating other food. Given that a 3,500‐calorie imbalance over a year leads to a 1‐pound weight gain (Hill et al. 2003), an extra 159 calories will lead to an extra 4.9‐pound weight gain for people eating a 1,000‐calorie meal at Subway twice a week compared to those eating a comparable meal at McDonald’s with the same frequency. Our findings regarding the robustness of health halo effects suggest that it is unlikely that consumers will learn to estimate calories from experience. In study 3, for example, meals were 56% larger when participants received a coupon for a Subway sandwich than when they received a coupon for a Big Mac, yet calorie estimations were 19% lower for the Subway meals than for the McDonald’s meals. What can be done to improve the accuracy of calorie estimation?
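The arithmetic behind annual weight‐gain projections of this kind is simple. The sketch below assumes 52 weeks per year, which yields roughly 4.7 pounds; the published 4.9‐pound figure presumably reflects slightly different rounding assumptions.

```python
# Back-of-envelope weight-gain projection from a recurring calorie
# underestimation (assumes 52 weeks per year).
underestimation_per_meal = 159   # calories: halo-driven estimation gap (study 1)
meals_per_week = 2
weeks_per_year = 52
calories_per_pound = 3500        # Hill et al. (2003): 3,500-calorie imbalance = 1 pound

extra_calories_per_year = underestimation_per_meal * meals_per_week * weeks_per_year
extra_pounds_per_year = extra_calories_per_year / calories_per_pound

print(round(extra_pounds_per_year, 1))  # prints 4.7
```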
Although one suggestion may be to make nutrition information mandatory in all restaurants, this is vigorously opposed by the restaurant industry on the grounds that it is impractical and anticommercial. Our findings on the effectiveness of the consider‐the‐opposite strategy suggest that a potentially less controversial solution would be to launch educational campaigns encouraging people to critically examine the health claims associated with various restaurants and foods, in addition to evaluating the quality and quantity of the ingredients. Still, from a public health perspective, the best result would be achieved if people perceived all restaurants serving large portions of calorie‐dense foods, such as McDonald’s but also Subway, as an indulgence. Raising the accessibility of unhealthy primes would improve the accuracy of consumers’ calorie estimations for fast‐food meals and would dissuade them from ordering calorie‐rich beverages and side dishes.

## References

1. Advertising Age (2005), “#1 Subway,” 76 (2), 16.
2. Andrews, J. Craig, Scot Burton, and Richard G. Netemeyer (2000), “Are Some Comparative Nutrition Claims Misleading? The Role of Nutrition Knowledge, Ad Claim Type and Disclosure Conditions,” Journal of Advertising, 29 (3), 29–42.
3. Andrews, J. Craig, Richard G. Netemeyer, and Scot Burton (1998), “Consumer Generalization of Nutrient Content Claims in Advertising,” Journal of Marketing, 62 (4), 62–75.
4. Arkes, Hal R. (1991), “Costs and Benefits of Judgment Errors: Implications for Debiasing,” Psychological Bulletin, 110 (3), 486–98.
5. Balasubramanian, Siva K. and Catherine Cole (2002), “Consumers’ Search and Use of Nutrition Information: The Challenge and Promise of the Nutrition Labeling and Education Act,” Journal of Marketing, 66 (3), 112–27.
6. Barrett, Jennifer (2003), “Fast Food Need Not Be Fat Food,” Newsweek, 142 (15), 73–74.
7. Baumeister, Roy F. (2002), “Yielding to Temptation: Self‐Control Failure, Impulsive Purchasing, and Consumer Behavior,” Journal of Consumer Research, 28 (4), 670–76.
8. Calorie Control Council National Consumer Surveys (2004), “The Use of Low‐Calorie Products in America,” Calorie Control Council, Atlanta, http://www.caloriecontrol.org/lcchart.html.
9. Chandon, Pierre and Brian Wansink (2002), “When Are Stockpiled Products Consumed Faster? A Convenience‐Salience Framework of Postpurchase Consumption Incidence and Quantity,” Journal of Marketing Research, 39 (3), 321–35.
10. ——— (2006), “How Biased Household Inventory Estimates Distort Shopping and Storage Decisions,” Journal of Marketing, 70 (4), 118–35.
11. ——— (2007), “Is Obesity Caused by Calorie Underestimation? A Psychophysical Model of Fast‐Food Meal Size Estimation,” Journal of Marketing Research, 44 (February), 84–99.
12. Chapman, Gretchen B. and Eric J. Johnson (1999), “Anchoring, Activation, and the Construction of Values,” Organizational Behavior and Human Decision Processes, 79 (2), 115–53.
13. Cochran, Winona and Abraham Tesser (1996), “The ‘What the Hell’ Effect: Some Effects of Goal Proximity and Goal Framing on Performance,” in Striving and Feeling: Interactions among Goals, Affect, and Self‐Regulation, ed. Leonard L. Martin and Abraham Tesser, Mahwah, NJ: Erlbaum, 99–120.
14. Cutler, David, Edward Glaeser, and Jesse Shapiro (2003), “Why Have Americans Become More Obese?” Journal of Economic Perspectives, 17 (3), 93–118.
15. Dhar, Ravi and Itamar Simonson (1999), “Making Complementary Choices in Consumption Episodes: Highlighting versus Balancing,” Journal of Marketing Research, 36 (1), 29–44.
16. Dhar, Ravi and Klaus Wertenbroch (2000), “Consumer Choice between Hedonic and Utilitarian Goods,” Journal of Marketing Research, 37 (February), 60–71.
17. Fiedler, Klaus (1996), “Explaining and Simulating Judgment Biases as an Aggregation Phenomenon in Probabilistic, Multiple‐Cue Environments,” Psychological Review, 103 (1), 193–214.
18. Fishbach, Ayelet and Ravi Dhar (2005), “Goals as Excuses or Guides: The Liberating Effect of Perceived Goal Progress on Choice,” Journal of Consumer Research, 32 (3), 370–77.
19. Fishbach, Ayelet, Ronald S. Friedman, and Arie W. Kruglanski (2003), “Leading Us Not into Temptation: Momentary Allurements Elicit Overriding Goal Activation,” Journal of Personality and Social Psychology, 84 (2), 296–309.
20. Food and Drug Administration (2006), “The Keystone Forum on Away‐from‐Home Foods: Opportunities for Preventing Weight Gain and Obesity,” http://www.docuticker.com/2006/06/keystone-forum-on-away-from-home-foods.html.
21. Food Marketing Institute (2005), Meeting the Needs of Family Health and Wellness, Washington, DC: Food Marketing Institute.
22. Garretson, Judith A. and Scot Burton (2000), “Effects of Nutrition Facts Panel Values, Nutrition Claims, and Health Claims on Consumer Attitudes, Perceptions of Disease‐Related Risks, and Trust,” Journal of Public Policy and Marketing, 19 (2), 213–27.
23. Heini, Adrian F. and Roland L. Weinsier (1997), “Divergent Trends in Obesity and Fat Intake Patterns: The American Paradox,” American Journal of Medicine, 102 (3), 259–64.
24. Hill, James O., Holly R. Wyatt, George W. Reed, and John C. Peters (2003), “Obesity and the Environment: Where Do We Go from Here?” Science, 299 (February 7), 854–55.
25. Hsee, Christopher K. (1996), “The Evaluability Hypothesis: An Explanation for Preference Reversals between Joint and Separate Evaluations of Alternatives,” Organizational Behavior and Human Decision Processes, 67 (3), 247–57.
26. Johar, Gita Venkataramani (1995), “Consumer Involvement and Deception from Implied Advertising Claims,” Journal of Marketing Research, 32 (3), 267–79.
27. Kardes, Frank R., Steven S. Posavac, and Maria L. Cronley (2004), “Consumer Inference: A Review of Processes, Bases, and Judgment Contexts,” Journal of Consumer Psychology, 14 (3), 230–56.
28. Keller, Scott B., Mike Landry, Jeanne Olson, Anne M. Velliquette, Scot Burton, and J. Craig Andrews (1997), “The Effects of Nutrition Package Claims, Nutrition Facts Panels, and Motivation to Process Nutrition Information on Consumer Product Evaluations,” Journal of Public Policy and Marketing, 16 (2), 256–79.
29. Kivetz, Ran and Itamar Simonson (2002), “Self‐Control for the Righteous: Toward a Theory of Precommitment to Indulgence,” Journal of Consumer Research, 29 (2), 199–217.
30. Kopelman, Peter G. (2000), “Obesity as a Medical Problem,” Nature, 404 (6778), 635–43.
31. Kozup, John C., Elizabeth H. Creyer, and Scot Burton (2003), “Making Healthful Food Choices: The Influence of Health Claims and Nutrition Information on Consumers’ Evaluations of Packaged Food Products and Restaurant Menu Items,” Journal of Marketing, 67 (2), 19–34.
32. Ledikwe, Jenny H., Julia A. Ello‐Martin, and Barbara J. Rolls (2005), “Portion Sizes and the Obesity Epidemic,” Journal of Nutrition, 135 (4), 905–9.
33. Livingstone, M. Barbara E. and Alison E. Black (2003), “Markers of the Validity of Reported Energy Intake,” Journal of Nutrition, 133 (3), S895–S920.
34. McKenzie, Debra C., Rachel K. Johnson, Jean Harvey‐Berino, and Beth Casey Gold (2002), “Impact of Interviewer’s Body Mass Index on Underreporting Energy Intake in Overweight and Obese Women,” Obesity Research, 10 (6), 471–77.
35. Moorman, Christine (1990), “The Effects of Stimulus and Consumer Characteristics on the Utilization of Nutrition Information,” Journal of Consumer Research, 17 (3), 362–74.
36. ——— (1996), “A Quasi Experiment to Assess the Consumer and Informational Determinants of Nutrition Information,” Journal of Public Policy and Marketing, 15 (1), 28–44.
37. Moorman, Christine, Kristin Diehl, David Brinberg, and Blair Kidwell (2004), “Subjective Knowledge, Search Locations, and Consumer Choice,” Journal of Consumer Research, 31 (3), 673–80.
38. Muhlheim, Lauren S., David B. Allison, Stanley Heshka, and Steven B. Heymsfield (1998), “Do Unsuccessful Dieters Intentionally Underreport Food Intake?” International Journal of Eating Disorders, 24 (3), 259–66.
39. Mussweiler, Thomas (2003), “Comparison Processes in Social Judgment: Mechanisms and Consequences,” Psychological Review, 110 (3), 472–89.
40. Mussweiler, Thomas, Fritz Strack, and Tim Pfeiffer (2000), “Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility,” Personality and Social Psychology Bulletin, 26 (9), 1142–50.
41. National Center for Health Statistics (2002), “Obesity Still on the Rise, New Data Show,” news release (October 8, 2002), http://www.cdc.gov/nchs/pressroom/02news/obesityonrise.htm.
42. Nestle, Marion (2003), “Increasing Portion Sizes in American Diets: More Calories, More Obesity,” Journal of the American Dietetic Association, 103 (1), 39–40.
43. Nielsen, Samara Joy and Barry M. Popkin (2003), “Patterns and Trends in Food Portion Sizes, 1977–1998,” Journal of the American Medical Association, 289 (4), 450–53.
44. Okada, Erica Mina (2005), “Justification Effects on Consumer Choice of Hedonic and Utilitarian Goods,” Journal of Marketing Research, 42 (1), 43–53.
45. Osselaer, Stijn, Suresh Ramanathan, Margaret Campbell, Joel Cohen, Jeannette Dale, Paul Herr, Chris Janiszewski, Arie Kruglanski, Angela Lee, Stephen Read, J. Russo, and Nader Tavassoli (2005), “Choice Based on Goals,” Marketing Letters, 16 (3/4), 335–46.
46. Polivy, Janet and C. Peter Herman (1985), “Dieting as a Problem in Behavioral Medicine,” in Advances in Behavioral Medicine, ed. Edward S. Katkin and Steven B. Manuck, New York: JAI, 1–37.
47. Putnam, Judy, Jane Allshouse, and Linda Scott Kantor (2002), “U.S. Per Capita Food Supply Trends: More Calories, Refined Carbohydrates, and Fats,” FoodReview, 25 (3), 2–15.
48. Raghunathan, Rajagopal, Rebecca Walker Naylor, and Wayne D. Hoyer (2006), “The Unhealthy = Tasty Intuition and Its Effects on Taste Inferences, Enjoyment, and Choice of Food Products,” Journal of Marketing, 70 (4), 170–84.
49. Ramanathan, Suresh and Geeta Menon (2006), “Time‐Varying Effects of Chronic Hedonic Goals on Impulsive Behavior,” Journal of Marketing Research, 43 (4), 628–41.
50. Ramanathan, Suresh and Patti Williams (2007), “Immediate and Delayed Emotional Consequences of Indulgence: The Moderating Influence of Personality Type on Mixed Emotions,” Journal of Consumer Research, 34 (2), electronically published May 30.
51. Ross, William T. and Elizabeth H. Creyer (1992), “Making Inferences about Missing Information: The Effects of Existing Information,” Journal of Consumer Research, 19 (1), 14–25.
52. Rozin, Paul, C. Fischler, S. Imada, A. Sarubin, and A. Wrzesniewski (1999), “Attitudes to Food and the Role of Food in Life in the U.S.A., Japan, Flemish Belgium and France: Possible Implications for the Diet‐Health Debate,” Appetite, 33 (2), 163–80.
53. Seiders, Kathleen and Ross D. Petty (2004), “Obesity and the Role of Food Marketing: A Policy Analysis of Issues and Remedies,” Journal of Public Policy and Marketing, 23 (2), 153–69.
54. Shiv, Baba and Alexander Fedorikhin (1999), “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making,” Journal of Consumer Research, 26 (December), 278–92.
55. Strack, Fritz, Norbert Schwarz, Herbert Bless, Almut Kübler, and Michaela Wänke (1993), “Awareness of the Influence as a Determinant of Assimilation versus Contrast,” European Journal of Social Psychology, 23 (January–February), 53–62.
56. Talbot, Laura A., Jerome L. Fleg, and E. Jeffrey Metter (2003), “Secular Trends in Leisure‐Time Physical Activity in Men and Women across Four Decades,” Preventive Medicine, 37 (1), 52–60.
57. Tooze, Janet A., Amy F. Subar, Frances E. Thompson, Richard Troiano, Arthur Schatzkin, and Victor Kipnis (2004), “Psychosocial Predictors of Energy Underreporting in a Large Doubly Labeled Water Study,” American Journal of Clinical Nutrition, 79 (5), 795–804.
58. Wansink, Brian (1994), “Advertising’s Impact on Category Substitution,” Journal of Marketing Research, 31 (4), 505–15.
59. ——— (2003), “Overcoming the Taste Stigma of Soy,” Journal of Food Science, 68 (8), 2604–6.
60. ——— (2005), Marketing Nutrition: Soy, Functional Foods, Biotechnology, and Obesity, Champaign: University of Illinois Press.
61. ——— (2006), Mindless Eating: Why We Eat More than We Think, New York: Bantam‐Dell.
62. Wansink, Brian and Pierre Chandon (2006), “Can ‘Low‐Fat’ Nutrition Labels Lead to Obesity?” Journal of Marketing Research, 43 (4), 605–17.
Jeffrey Metter (2003), “Secular Trends in Leisure‐Time Physical Activity in Men and Women across Four Decades,” Preventive Medicine, 37 (1), 52–60. 57. Tooze, Janet A., Amy F. Subar, Frances E. Thompson, Richard Troiano, Arthur Schatzkin, and Victor Kipnis (2004), “Psychosocial Predictors of Energy Underreporting in a Large Doubly Labeled Water Study,” American Journal of Clinical Nutrition, 79 (5), 795–804. 58. Wansink, Brian (1994), “Advertising’s Impact on Category Substitution,” Journal of Marketing Research, 31 (4), 505–15. 59. ——— (2003), “Overcoming the Taste Stigma of Soy,” Journal of Food Science, 68 (8), 2604–6. 60. ——— (2005), Marketing Nutrition—Soy, Functional Foods, Biotechnology, and Obesity, Champaign: University of Illinois Press. 61. ——— (2006), “Mindless Eating: Why We Eat More than We Think,” New York: Bantam‐Dell. 62. Wansink, Brian and Pierre Chandon (2006), “Can Low‐Fat' Nutrition Labels Lead to Obesity?” Journal of Marketing Research, 43 (4), 605–17.
2016-02-06 23:34:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34299299120903015, "perplexity": 12840.84386035353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148402.62/warc/CC-MAIN-20160205193908-00113-ip-10-236-182-209.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/19212/why-does-electron-velocity-increase-with-the-increase-of-atomic-number
# Why does electron velocity increase with the increase of atomic number? I know that for atoms with large values of Z, relativistic effects have to be applied. Why does electron velocity increase with the increase of atomic number? You can easily understand that based on the particle-in-a-box model. Large Z corresponds to a large electrostatic field around the nucleus which acts on the electrons. This leads to a smaller "box", i.e. the core electrons of heavy elements move on orbitals of smaller radius than the electrons of light elements. The energy of the electron (if we use the simple 1D version) is $$E = \frac{n^2h^2}{8mL^2}$$ where $L$ corresponds to the box size and the energy ($E$) is purely kinetic. You can see that a smaller box corresponds to a larger kinetic energy. Why does electron velocity increase with the increase of atomic number? Newton's second law tells us that for an electron moving in a stable circular orbit around a nucleus, the centripetal force required to maintain the orbit must be supplied by the electrostatic attraction pulling in, or $${\frac{m_e~ v^2}{r}} = {\frac{Z~k_e ~e^2}{r^2}}~~(1)$$ where $m_e$ is the electron's mass, $e$ is the electron charge, $k_e$ is Coulomb's constant and $Z$ is the atom's atomic number. Rearranging equation (1) yields $$v = \sqrt{\frac{Z~k_e~e^2}{m_e r}}~~(2)$$ From equation (2) we can see that the electron velocity increases as the atomic number, $Z$, increases. Edit: Response to OP's comment "I am expecting some quantum mechanical explanation": As the electron velocity given by equation (2) becomes comparable to the speed of light, the mass of the electron increases due to relativistic effects: $$m_{rel}=\frac{m_{e}}{\sqrt{1-(v_e/c)^2}}$$ As the electron mass increases, the s (and to some extent the p) orbital radius contracts. So relativistic effects affect both the electron mass and the orbital radius.
In terms of electron velocity, these two relativistic effects offset one another and the electron velocity is not affected; equation (2) is still valid. • I am expecting some quantum mechanical explanation. – EJC Nov 6 '14 at 17:03 • The quantum mechanical response is similar. The potential energy of the electron includes the electrostatic interaction between the nuclear charge Z and the electron. Even ignoring the "stable circular orbit" idea, there is a balance between the angular momentum and the electrostatic attraction and the remaining equations are similar. Nov 6 '14 at 20:45
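To put rough numbers on equation (2): the sketch below (my own illustration, not from the answer above) uses the Bohr-model radius $r = a_0/Z$ for the innermost electron to estimate the 1s electron speed for hydrogen and gold, showing why relativistic corrections start to matter at large $Z$.

```python
# Non-relativistic estimate of a 1s electron's speed from equation (2),
# v = sqrt(Z k_e e^2 / (m_e r)), with the Bohr-model radius r = a0 / Z.
# The element choices (H, Au) are illustrative only.
import math

K_E = 8.9875517923e9        # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg
A0 = 5.29177210903e-11      # Bohr radius, m
C = 2.99792458e8            # speed of light, m/s

def one_s_speed(z: int) -> float:
    """Speed of a 1s electron in the Bohr picture: the 'box' shrinks as a0/Z."""
    r = A0 / z
    return math.sqrt(z * K_E * E_CHARGE**2 / (M_E * r))

for z, name in [(1, "H"), (79, "Au")]:
    v = one_s_speed(z)
    print(f"{name}: v = {v:.3e} m/s, v/c = {v / C:.3f}")
```

For hydrogen this reproduces the familiar v/c ≈ 0.007 (the fine-structure constant), while for gold's core electron v/c ≈ 0.58 — clearly in the regime where the relativistic mass correction above is significant.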
http://mathhelpforum.com/discrete-math/115583-need-help-proof-dealing-functions.html
# Math Help - Need help with a proof dealing with functions 1. ## Need help with a proof dealing with functions Let $f\colon A \to B$ be a function. Prove that $f$ is surjective if and only if $f^{-1}(W) \ne \emptyset$ for all nonempty subsets $W$ of $B$. I really do not know what to do with this. Thanks for your help everyone 2. Originally Posted by steph3824 Let $f\colon A \to B$ be a function. Prove that $f$ is surjective if and only if $f^{-1}(W) \ne \emptyset$ for all nonempty subsets $W$ of $B$. Because $f$ is surjective, $\left( {\forall b \in B} \right)\left( {\exists a \in A} \right)\left[ {f(a) = b} \right]$. Is it possible for $f^{-1}(\{b\})$ to be empty? Likewise, if $\left( {\forall b \in B} \right)\left[ {f^{ - 1} (\{ b\} ) \ne \emptyset } \right]$, must $f$ be surjective?
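If it helps intuition, the equivalence can be sanity-checked by brute force on small finite sets; in the sketch below the particular sets and functions are arbitrary examples of my own, not part of the proof.

```python
# Brute-force check on small finite sets: f is surjective iff the
# preimage f^{-1}(W) is nonempty for every nonempty subset W of B.
from itertools import combinations

def preimage(f: dict, w: set) -> set:
    """f^{-1}(W) for a function represented as a dict from A to B."""
    return {a for a, b in f.items() if b in w}

def is_surjective(f: dict, b_set: set) -> bool:
    return set(f.values()) == b_set

def nonempty_subsets(s: set):
    items = list(s)
    return (set(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r))

def preimages_all_nonempty(f: dict, b_set: set) -> bool:
    return all(preimage(f, w) for w in nonempty_subsets(b_set))

B = {1, 2, 3}
f_onto = {"x": 1, "y": 2, "z": 3}  # surjective onto B
f_not = {"x": 1, "y": 1, "z": 2}   # misses 3, so f^{-1}({3}) is empty

assert is_surjective(f_onto, B) and preimages_all_nonempty(f_onto, B)
assert not is_surjective(f_not, B) and not preimages_all_nonempty(f_not, B)
```

The key case is the singleton $W = \{b\}$, exactly as Plato's hint suggests: the two directions of the proof both reduce to whether $f^{-1}(\{b\})$ can be empty.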
https://blog.dotsmart.net/2008/06/12/solved-cannot-read-from-the-source-file-or-disk/?replytocom=762
# Solved: “Cannot read from the source file or disk”

I’ve finally solved a problem that’s been bugging me for years. One of our file shares ended up with several undeletable files. Attempting to delete them results in “Error Deleting File or Folder – Cannot delete file: Cannot read from the source file or disk”. Note: Windows 7’s version of this message is something like: “Could not find this item: This is no longer located in C:\Blah. Verify the item’s location and try again.” Even going to the file’s properties to check permissions presented a very blank properties dialog. And a CHKDSK didn’t sort things out either.

It turns out the problem was: the filename ended with a dot, e.g. it was something like “C:\Temp\Stuff\Sales Agreement.”. As far as Windows is concerned this is an invalid file name: so although it gets reported in a directory listing, the standard Windows APIs for manipulating files subsequently deny its existence.

So how did this file get created in the first place? The answer: a Mac. The file was on a file share which had been accessed by a Mac user. Macs tend to write all sorts of metadata to extra “.DS_Store” files and suchlike, and had left this file behind.

So if Windows doesn’t appear to allow these file names, how did they get to be created? Well, it turns out that NTFS allows all sorts of file name/path weirdness that Windows, or specifically the Win32 API, doesn’t allow. For example, NTFS actually allows file paths up to about 32,000 characters, but Windows restricts file paths to no more than 260 characters (MAX_PATH). I suppose this is all for DOS/Windows 9x backwards compatibility. As these files were being accessed over a file share, I guess the usual Win32 checks are bypassed. But thankfully you can get Win32 to ignore these checks by prefixing your file paths with \\?\ (i.e. C:\Temp\SomeFile.txt becomes \\?\C:\Temp\SomeFile.txt), which I discovered after reading this blog post about long paths in .NET.
So at a command prompt (Start > All Programs > Accessories > Command Prompt) I was able to delete the file using:

del "\\?\C:\Temp\Stuff\Sales Agreement."

Note: On Windows 7 it seems you can just use wildcards, without the \\?\ trick, to delete the offending files, e.g.:

del c:\temp\somefil*

If it’s a folder/directory you’re trying to delete, use the rd or rmdir command, e.g.:

rd /s "\\?\C:\Documents and Settings\User\Desktop\Annoying Folder."

Tip: as you’re typing the file/directory name use the TAB key to auto-complete the name (press TAB repeatedly to cycle through possible names). Of course the corollary of all of this is that you could really annoy somebody by doing this:

echo Hi > "\\?\%USERPROFILE%\Desktop\Annoying file you can't delete."

But you wouldn’t do that, would you? If this post helped you and you feel so inclined, feel free to buy me a beer 🙂

## 800 thoughts on “Solved: “Cannot read from the source file or disk””

1. At the end of reading this, I actually glanced over to the side of my desktop to see if there was a file there called ‘Annoying file you can’t delete.’ It was entirely involuntary. Clearly on a subconscious level I have deep trust issues. 1. jasm says: HI Im getting the below error message “Cannot delete file: Cannot read from source file or disk” C:\>del “\\?\C:\Documents and Settings\TEMP\Favourites\bò┌ⁿ8g¼o.╦╧∞” The system cannot find the file specified. C:\> rd /s “\\?\C:\Documents and Settings\TEMP\Favourites2|ì>⌡ƪ.▄rî” \\?\C:\Documents and Settings\TEMP\Favourites2|ì>)ƪ._rî, Are you sure (Y/N)? Y The filename, directory name, or volume label syntax is incorrect. Thanks Jasm 1. Ahmet says: hi, you better try this: del “\\?\%USERPROFILE%\Desktop\[copy and paste your annoying file name]” hope it works. cheers! Dear Sir, 3. I’m having the same problem. I’m deleting it from a device connected at my computer. it still can’t be found. It’s from my phone. 4.
gian says: help plzzz i got this Microsoft Windows XP [Version 5.1.2600] C:\Documents and Settings\Pc>del “\\?\C:\Temp\Stuff\Minecraft The system cannot find the path specified. C:\Documents and Settings\Pc>\ ‘\’ is not recognized as an internal or external command, operable program or batch file. C:\Documents and Settings\Pc>del D:\Minecraft D:\Minecraft\*, Are you sure (Y/N)? D:\Minecraft\*, Are you sure (Y/N)? y D:\Minecraft\░;┌1┤2ô≡.óñ\ The system cannot find the file specified. D:\Minecraft\ÅΩΩwñª╖Ω.╘cD The system cannot find the file specified. D:\Minecraft\¡╦▲◄>jç╛.eé♣ The filename, directory name, or volume label syntax is incorrect. D:\Minecraft\Σc ½♦PÇ◄._εε The filename, directory name, or volume label syntax is incorrect. C:\Documents and Settings\Pc>del “\\?\%USERPROFILE%\Desktop\D:\Minecraft The filename, directory name, or volume label syntax is incorrect. C:\Documents and Settings\Pc>del D:\Minecraft D:\Minecraft\*, Are you sure (Y/N)? y D:\Minecraft\░;┌1┤2ô≡.óñ\ The system cannot find the file specified. D:\Minecraft\ÅΩΩwñª╖Ω.╘cD The system cannot find the file specified. D:\Minecraft\¡╦▲◄>jç╛.eé♣ The filename, directory name, or volume label syntax is incorrect. D:\Minecraft\Σc ½♦PÇ◄._εε The filename, directory name, or volume label syntax is incorrect. C:\Documents and Settings\Pc> 2. Alec says: Praise be to those smarter than I am. That file has been a blight on my desktop for weeks but it just now started bothering me to the point where I was moved to try and find an answer. Thank you! 1. climb says: ditto. The internet is a wonderful place 2. tony b says: Also look for blank at the end of the files not just a dot. it also works perfectly thanks!!!. 3. JB says: Yes it does work for a blank at the end— good note. I would have dispaired without that comment… 4. JM says: I had the same problem, folders created by a Robocopy crash. Turned out the folder names ended with a space. The \\? method worked to delete these – thanks so much. 3. 
Alex says: Hi, I am experiencing this same error in MOSS but when I try to rename a file through the explorer view of a document library. To let you know, the file has a ‘+’ character, was brought into MOSS through drag and drop in the explorer window and is also part way through a workflow. Not knowing the origins of the document can I assume that this is occuring due to the same reason? Thanks, Alex. 1. Chris says: I am having this same problem, have you found a solution Alex? 2. Duncan Smart says: Chris, Sharepoint stores everything in a SQL database rather than the filesystem. Maybe you can rename it via the web-based UI instead of the explorer view. Or as a last resort – use SQL Server Enterprise manager to track down the database record and rename it there? 3. Chris says: Hey Duncan, You can rename these files through the Web-UI perfectly. We are moving to Sharepoint as a complete document store, and it is only on the odd occasion taht we find this problem. Renaming or trying to delete a file through the explorer view (which we have mapped to a drive) has no effect, then trying to do it again gives the error descibed: cannot read from source file or disk. It’s so frustrating. We don’t have workflow or any sort of approval or version history on this library. 4. RogerRabbitsClone says: HAHAHAHAHA! oh man, the first comment i read this. i laughed so hard i almost fell over. 5. vampelle says: THX THX THX U. i finally able to delete the stupid folder which may have been stopping from defrag my drive 6. jv says: I have the same issue as jasm, but my files are on a USB drive. The file names are all garbled up. Any suggestion? Thanks, JV 7. Surendra Panda says: I am having the same above issue but with a flash drive… the file names are bit weired & I tried using the above method to delete them but didn’t work File names are also unusual like some codes, please provide me a email address where I can send you the screenshot. 1. 
Riley Hensly says: If you have a solution to this problem please share it this way. I’m having the same problem on my flash drive. THANKS! 2. Duncan Smart says: Simpler solution may be to copy the data you want to keep off you flash drive and re-format the flash drive (right click it and choose Format…). 3. Riley Hensly says: Thanks Duncan – will reformat at this point. I had the same issue and could not delete the file, so instead i removed all other files from that folder, then deleted the folder using: rd /s “\\?\C:\Documents and Settings\User\Desktop\Annoying Folder.” (which I found in this article). 8. sheth says: james, Thanks for the code..it worked..gr8 work.. Thanks, Sheth.. 9. Debra Carver says: I just want to Thank You for this awesome answer to the question I had with getting the cannot delete file and cannot read from source. I was able to get rid of the pesky fie. Thanks for everything. Debra 10. Hi, my problem was relevant to copying .DAT from disc to local drive, that was indeed not strived to be solved here. I would like to purvey my observational solution to this problem, you can just add it up in winzip archive and that will do eventuate it compressed form by then you can move it around your local hard drive. Best regards, Rizwan. 11. Pj says: Thank you! I recently gave some files crazy names ending in dots and they were bugging me, just sat there on my desktop. I followed your advice and it worked like a dream. I used the right arrow key to just keep writing the same command line and then repeatedly hit the tab key – as you suggested – to go through my (previously tidied) desktop. Thanks very much, this had been bugging me for a whole week and I haven’t got much patience! Great idea to send that file to someone BTW. There’s another file that you can’t ever make.. and I forgot what it’s called. Maybe it was DELETE.???, I bet you know. (Saw it once on a website and tested it, it’s a window default thingy)… 1. 
Help Pj says: The folder name is : CON 12. vivek says: it worked! Thanx 2. INTRIGUING. Thanks for the heads up on that Duncan. 3. Tatyana says: I’ve inherited a bunch of files ending in “.” and had no idea how to get rid of them, and now this simple and elegant solution. Thank you very much, Duncan! 1. vegiVamp says: A good solution to the problem, certainly, but you have to be a pretty major ms fanboi to call this ‘simple and elegant’ instead of ‘a crappy problem that should’ve been fixed at the api level ten years ago’. 1. jack says: To be sure, all systems, including Windows, contain annoyances. Then again, you have to be a pretty major linux or apple fanboi not to notice the incredible power and ease of use in the Windows-Office ecosystem. As a person who has and supports all three, I’ll take Windows on my personal device any day, even with this stupid error. Uh, no, I don’t work for microsoft. 4. Tech Guy says: Sweeeeeet! Worked like a charm. I had tried booting in safe mode and deleting, MoveOnBoot, and toying with permissions, none of which worked. The files did indeed come from a Mac and in my case it wasn’t a period on the end, but something that looked like a space. I highlighted the name of the offending file, copied it, got in the command prompt window, typed your command, (del “\\?\C:\folder name\sub folder name\), then right-clicked to get the paste function in the command prompt window and typed the close quote. That way, no matter what the offending character is, there’s no guess work. Thank you! 1. I know this conversation is over a year old, but I had to reply: Tech Guy, you are a genius. I used your copy/paste idea, and the offending file is gone! Thank you for sharing. 1. Joe says: OH MY GOD, THANK YOU! 2. Sandy Lee says: I have TWO folders on my desktop that WILL NOT DELETE that says cannot locate source just like above. I have tried everything. 
I did what you said exactly and it keeps coming up on the command prompt window right after what I copied and past…more? Please help me get rid of these two folders which are empty at 0 bytes. Thank you!!! 2. Jordi says: Omg! I kept trying to type it into the command prompt as ctrl+v didn’t work, and I didn’t think you could right click haha. I also wasn’t aware I had something that looked like a space in the file names either, so it wasn’t working! Now it did :D! Thank you! 3. Dee says: Yessssssssss….this worked and it was easy. Thank You!!!!! 5. Nation says: Very helpful! Glad I found this via Google. My problem was a folder name generated by a program. I tried to move it, rename it, etc — no dice. In my case the last character in the name appeared to be a space. I couldn’t seem to rename it with this trick (got syntax error – sorry, I didn’t record exact error message). So I copied the contents into another folder, and then deleted the bad folder ( rd “\\?\c:\folder\folder\bad name”). It went away. Some where in some manual the exact syntax rules of these DOS commands is documented, I’m sure. 6. Nation says: Oh, and I couldn’t resist trying your evil suggestion. Yes, it worked on my (Win XP NTFS) system. And the same method deleted it. There may be some hacker potential for this method of creating (practically) undeletable files. 7. Aris says: Great! It worked like magic. Thanks. 🙂 8. borg says: ok so im not very computer savy and i still cannot get a file on my desktop to delete it gives me this same error 1. Jim says: I would just like to thank you Lindsay for posting this information on the program Unlocker! It deleted that annoying file right off of my desktop with easy! Thanks Again Lindsay!! 1. Tim says: Trojan virus detected on unlocker 2. D says: Well I just came up here to thank Lindsay too for posting the link to this marvelous program, Unlocker, which has worked like a charm on some immovable icons that have been bugging me all year. 
But here I see Tim is telling us that there’s a Trojan virus on it Is this true?? Or is he fear mongering? 2. lemon says: tnx a lot^^ the UNLOCKER did remove the 2 files on my desktop. i have tried to delete these 2 in command prompt but it just cnt be deleted there. the commnd prompt did delete my other files in my desktop. just for a history, these 2 files appeared after i cleaned my hard disk. actually, i have deleted these 2 files already. they r torrent files and has a long file name and a SPACE. i just dont know which of these four made them appear again in my desktop. 1. registry mechanic 2. pc doctor 3. ccleaner 4. purera and the reason why i cleaned my hard disk is bec. the windows update just downloaded every update and filled my harddisk.now i turned the automatic updates off.not sure if it is a good thing to turn it off. -THE END- 3. Nick says: I would also like to thank Lindsay for posting the link to unlocker. It did NOT have a trojan like another poster said. It could not unlock the file, but gave me the option to delete it anyway. Thank you, Thank You. Nick 4. Mike Lim says: I know this comment is years old, but thank you so much! I was able to delete a file from my external hard drive using Unlocker that I couldn’t delete using the method in this article. Thanks again! 9. DRev says: Thank you. Very much. 10. Jimmy Jim says: I just wanted to thank you for posting this. It helped me get rid of an annoying file on my desktop 🙂 11. Korvus Redmane says: Thanks. I had a file which i think was some kinda deleated download which i couldn’t remove, but this worked. 12. Edward says: So I have little experience with command prompt. How exactly do I go about prefixing the file path to the file I want to delete? 13. Jason says: doesn’t work with Win XP SP3 14. augerin says: great! thanks. 15. LC says: @Lindsay George the unlocker program worked for me, thanks. 16. Chrome says: Good lord, thanks for this. 
I tried ALL the other things, and this is the only method that did it. In my case, it was a space at the end of too-long filenames.(It worked even though I have SP3!) Perhaps these WERE a deliberate trap… 17. whatwouldmattdo says: I had this annoying die-hard folder with a space (or some invisible non-ANSI space bizzo) at the end.. After trying a hundred things, that “\\?\” thing was just the ticket. Thank you! 18. Eugenia says: Hi, I’m not really sure how you put your solution into the command prompt, because every time I do, it says that windows cannot find ‘del’. Can you type exactly what I’m supposed to type? The file location is: C:\Documents and Settings\Owner\Desktop\Maddie Bailis It’s driving me nuts! 19. Eugenia says: Wow! Never mind! It worked! I just put in the wrong thing. Thanks! 20. Matthias says: Finally! I had been searching all of Microsoft Knowledge Base and the whole web for a solution to this, with nothing but “Contact Microsoft support”. This here worked! Thank you *very* much! Please note that the word “Solved” in the subject header caught my eyes within the millions of search results that state the same question but have no answer. So another thank you for helping me to get here! 21. Thanks. Worked the first time. It was smart thinking to use the word ‘Solved’ in the Header. 22. Keith says: Jason, and all others who may be running XP with SP3, I found a possible solution as well. From the command prompt, navigate via dos commands to the directory containing the offending file/folder. Now type dir /x/a This will show you the old 16 bit 8 character dos name for the file or folder. Now type del 8characterfilename /ah Not sure if the /ah is needed but this worked for me. I hope it helps =] Good luck to you all! 1. ben135 says: Thanks Keith, that worked perfectly. 2. Keksich says: What if the offending folder is located on another drive? (D: in my case, cmd won’t navigate to it) 1. Keksich says: Nevermind, found the solution! 
Navigate to C:/WINDOWS, then just put d:, so it looks like C:\WINDOWS>d: 3. insan says: Thank you, thank you! 😀 4. shep says: Brilliant! I couldn’t get the original method to work but this worked perfectly first time, thanks 🙂 5. san says: Thanks Keith, an hour of experiment through this article got me happy ending… you made me close this window, with a smile on my face. 6. Joe says: Great, Keith, you are a smart fellow. 7. Paige says: OMG!! Thank you Keith! So glad to be rid of these files! 8. Justin says: Thank you very much Keith…. it work for me…. iwas annoying me for long time now 🙂 9. Thanks, this worked! The IT Guys wanted to delete my entire userprofile to get rid of this file. 10. DZDNCNFZD says: Wow I’ve been trying to delete a 0 byte “system file” with a space attached to the end for forever and this finally did the trick. Thank you! 11. gabriel says: thanks a bunch keith that worked awesome.! 12. Sksyae says: thanks mf, its good 23. vasssos says: hi Im desperate..i’ve tried everything but this thing won’t go..I’m not good in command pro so i can’t figure out what to do.. The folder i’m trying to get rid of has the following name: [snip] With the dot at the end and it’s located in C:\Documents and Settings\Vasssos\My Documents\Downloads Any help will be much appreciated! vasssos 24. Duncan Smart says: @Vassos: at the Command Prompt type: del "\\?\C:\Documents and Settings\Vasssos\My Documents\Downloads\rest-of-your-path-here." Note you can copy and paste text to the Command Prompt also, which will help. 1. ian says: this really worked very well… thanks!!! 2. Wilson says: I have no words… to thank you. My FILE “””Vanished away””” My XP was been SP3. Jason, Try it All the best. 25. vasssos says: hey I tried what you said but i get “The system cannot find the path specified” even though i can see the folder when i type in “dir” 26. Duncan Smart says: @vassos: One thing that may help is to use the TAB key to auto-complete the file path for you. 
Type the following (note the \\?\ at the start): …then press TAB until your filename appears, when it does, press ENTER. 1. PK says: This Worked for Me !!! Many thanks Duncan. 2. CompulsivePresetCollector says: Phantastic! That worked! What happened was a few months ago I downloaded a bunch of preset files for an onscreen guitar amp that is cross platform. User can upload presets and also download them of course. I noticed two extra files with no file extension in my download folder. Two files with the same name, respectively but with proper file names where also there plus a few hundred proper files. I moved the proper files to their destination and tried to delete the two without file extension. No luck. My guess is that they are uploaded from some Mac user? It just burned my brain to see those MFs in there week after week in my download folder haha! Anyway: the “del \\?\ …” trick worked!!! 😀 Thanx a lot! 3. Hussain says: Many thanks, this worked for me. 27. vasssos says: Still no luck. Note: This “thing” appears to be a folder. I don’t know if it makes any difference on the command for prompt. Thanks again 28. Ben says: @vassos: try using ‘rmdir’ instead of del. 1. in dos, navigate to the directory of the offending folder 2. dir /x/a to get the 8 character name of the offending folder 3. rmdir [your 8 bit filename] 29. vasssos says: @ben Thanks ben I tried that but I get the following: “The directory is not empty” 30. Stickman says: Holy Snap! It worked! Thanks. 31. Anon says: awesome! finally got rid of that thing many thanks! 🙂 32. sam says: thankyou so much! I had no idea how to get rid of a file I had partially downloaded and the cmd line worked perfectly first go 33. eucarya says: can anyone tell me how to prefix my file path? Every time I type the command in DOS I get could not find and I have absolutely no idea how to add the \\?\ prefix to the path. 34. GaGirrl says: @vassos I had the same problem as well. 
There were folders under the top-level "bad" folder that I had to delete first before I could do these steps.
1. So, in Windows, I had:
2. I deleted the "morestuff" and "stuff" folders in Windows just fine, but I couldn't delete the top-level folder (it had a space in it).
3. Once I deleted the subfolders and all that was left was c:\myfiles\badfolder, the suggestions that Ben had made worked.
4. I used rd instead of rmdir, but it shouldn't matter.
35. Johnny says: Hi… I'm trying to delete the following filename "The Gate dvdrip(ironeddie)Divx[1].avi." Here's what I've done in Command Prompt so far:
Directory of C:\Documents and Settings\Desktop\Maps
10/06/2008 04:37 PM <DIR> .
10/06/2008 04:37 PM <DIR> ..
09/22/2007 02:13 PM 0 THEGAT~1 The Gate dvdrip(ironeddie)Divx[1].avi.
10/05/2008 04:23 PM 21,661,580 Thumbs.db
2 File(s) 21,661,580 bytes
2 Dir(s) 17,562,890,240 bytes free
I tried every recommended method of deleting the file as described here, to no avail. I know I'm getting the file name right because I'm using the TAB key. Guess my question is: what exactly should I type in from C:\ (at the very beginning) to delete this file with the dot at the end? When I try to preface a path with \\?\ it says the following: CMD does not support UNC paths as current directories. =(
36. Johnny says: Exactly what's been tried thus far…
C:\>del \\?\C:\Documents and Settings\Desktop\Maps\The Gate dvdrip(ironeddie)Divx[1].avi.
The system cannot find the file specified.
C:\>del \\?\C:\Documents and Settings\Desktop\Maps\"The Gate dvdrip(ironeddie)Divx[1].avi."
The system cannot find the file specified.
C:\>del "\\?\C:\Documents and Settings\Desktop\Maps\The Gate dvdrip(ironeddie)Divx[1].avi."
The system cannot find the file specified.
37. Bassastingur says: Thanks man! You're a lifesaver…
38. eucarya says:
39. Martin says: A brilliantly helpful clue that almost worked! My files were hidden system files as well.
What did work was replacing the dot with a wildcard in the filename, so this is what worked:
del /ahs CAODIZOX?
/ahs allows deletion of hidden system files, and ? is substituted for the trailing dot.
1. Thanks, this one from Martin did it: "A brilliantly helpful clue that almost worked! My files were hidden system files as well. What did work was replacing the dot with a wildcard in the filename, so this is what worked: del /ahs CAODIZOX? (/ahs allows deletion of hidden system files, and ? substituted for the dot)."
2. Martian says: MARTIN'S tip was the last piece of the puzzle I required in addition to the main blog. THANKS MARTIN AND DUNCAN (the blogger)
3. kita says: I know this is a really old post, but I have to say thank you to both Duncan and Martin for your tips! Worked like a charm 🙂
1. Veera says: Martin's idea of '?' worked awesome… thanks a lot man…
40. snkcube says: Thank you for the tip! I was able to get rid of these pesky files thanks to you!
41. eucarya says: So not everyone is a DOS guru; how do you prefix a file path? Guys who keep posting "wow this worked, thx dewd!!!": how about telling us non-command-line-savvy folks exactly how you got it to work? Please and thank you!
42. Duncan Smart says: @eucarya: I'm not sure what you're having difficulty understanding. Just read the last few paragraphs: "So at a command prompt I was able to delete the file using…"
43. @vassos: try using 'ren' before del.
1. At the command prompt, navigate to the directory of the offending folder
2. dir /x /a to get the 8.3 short name of the offending folder
3. ren [your short filename] to something like "temp"
4. del temp
> yes, that worked fine for me.. Wolverine
44. eucarya says: Got it to work: had to use the "rd" command, as the folders were the ones inherited from my Mac with periods… The problem I was having was that I was forgetting to add the quotation marks to the file path I was trying to remove.
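Martin's wildcard variant above can be sketched as follows; the filename is his example, and the dir check first is an added precaution, since /a widens del to hidden and system files:

```bat
rem Check first what the pattern would actually match
rem (? stands in for the trailing dot, /a:hs shows Hidden+System entries):
dir /a:hs CAODIZOX?

rem Then delete; /ahs (equivalently /a:hs) lets del remove
rem hidden system files that a plain del would skip:
del /ahs CAODIZOX?
```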
So if you can’t get it to work make sure you add quotes around the file path (i.e. del “filepath” or rd “filepath”) Thanks!! 45. Megan says: After several tries I finally got it to work! Thanks 😀 46. Annie says: Thank you sooo much. I could NOT figure out how to delete one of those DOT files. Now I know!!!! Bless you my child. LOL 47. austin says: you win 48. Fred says: Great ! Thanks ! My problem, as others has already said, was not due to a “shittyfile.” but a “shittyfile.xls ” …. A space at the end of the .xls extension drove me crazy for two days. Thank you again for your smart post ! 49. Nicolas says: Now, How can i create a folder that end with period? How Can i rename a folder that contain a period at the end, what is the syntax in command prompt? Thanks 50. Kelly says: im still having trouble with this. someone please help! i have only used the command prompt once prior to this, so i need step by step instructions. I opened the command prompt and it says: C:\Documents and Settings\Kelly Gonrue> The file I want to delete is “Coach case complete ” 51. Janie says: Thanks a lot, it worked a treat on a stubborn folder with a space at the end of its name. XD 52. Andrew says: Thanks! This worked great at first, but I’m still having the same issue with hidden resource fork files that are created by Macs, for example it shows up in Win as “._filename ” Anybody have any luck with these files? I’ve tried using wildcards, /ahs, copy and paste, but can’t get it to delete them. Thanks again for this helpful post! now my mind is at ease.. thanks a lot.. yer a genius..for me. ^^, 54. sherifffruitfly says: A+. 55. Worked like a charm. Thanks for making this public! 🙂 56. Lyxeas says: Hey Martin! Well done the Wildcard at the end was the only way to take it off the Desktop! Good teamwork all! 57. Neelima says: Thanks a lot for this information !! Was of great help to me !! 58. Kasper says: Thank you so much, I was going ballistics over this file, and nothing worked but this! 
Cheers!! 59. Neil says: Tried this command and it worked on all but 1 file. THe file in question has 2 .. at the end no spaces. I can see it in windows explorer and in the properties it has no hidden attribute but if I do a dir in dos it is unable to see the file. So therefore it won’t delete it doing the above command? Any thoughts? 60. Nish says: I had the same problem, but with spaces in start and the end of the file name. It removed with Kieth’s solution. 61. Dude: I have been unable to delete a file for months. I work in an IT department and even go to school for IT;however, no-one could tell me how to remove the file. You, my friend are awesome; your method worked perfectly. Thank you very much for your assistance. Tacojew22 62. sam says: please help.i need to delete this “C:/Documents and Settings/Korisnik/Desktop/Gipsy Kings Hotel California “.how could i delete it.it has no dot but space at the end.thank you 63. Viktor says: cmd del “\\?\C:\Documents and Settings\Viktor\Desktop\ The Madagascar panguins ” The best solution! (i was googling for it 2 hours) 64. Pat says: Cheers Duncan, I’ve had a file on my desktop for months and tried everything I got by googling but nothing worked – until now 🙂 65. Santiago says: You are a genius! I spent hours trying to delete a crappy file with no success, until I found this method. Thank you so much! 66. Greg says: You rock, thanks for the help with this. All the other forums led me astray 🙂 67. dan says: ive tried it and it worked but the folder is still stuck in the recycle bin and it cannot be emptied 68. testing123 says: An extra tip: if the file is hidden use this command: del /ah “\\?\D:\uit_dienst\pootsm1\Homedir\1\CAH4QDX3” Greets 69. Oneil Williams says: I’m still unable to get rid of my pesky file. It’s called “FW_Ya know you’re a Floridian if..” and it’s on my desktop. So, I tried del \\?\C:Documents and Settings\Desktop\FW_Ya know you’re a Floridian if.. and it’s saying the file couldn’t be identified. 
Am I doing something wrong? 70. JA says: Thanks a lot! I downloaded a file off one of the many sendspace/rapidshare/mediafire clones and the file ended up corrupt. It has been bugging me for a while now and this really worked. To those who have trouble doing it, you have to make sure that the file is in quotes, that there are no typos at all, and if there is no dot at the end of the file then it probably ends in a space. 71. Hello_World says: thx bro… (damn Mac User..). 😀 at last.. i can get rid of those files.. 72. Fulchand says: The name of the file is: Instant File Search 1.7.5. I did the following;’ Start > Run > CMD> and on command prompt I typed: del”\\?\c:\documents and settings\fulchand shah\local settings\application data\microsoft\ cd burning\instant file search 1.7.5.” I can not delete it. Kindle show me steps I should follow 73. aftab says: I actually tried this and it didn’t work. Then I realized why – it was a folder and I needed to do the same thing with the rmdir command. Thanks for the help. 74. aftab says: I wonder if you could rename a file rather than create one ending with a period. Any help? (*srry im evil) 75. Chola says: I Had the same problem of “Johnny on October 6, 2008 Hi… I’m trying to delete the following filename “The Gate dvdrip(ironeddie)Divx[1].avi.” Here’s what I’ve done in Command Prompt so far. Directory of C:\Documents and Settings\Desktop\Maps 10/06/2008 04:37 PM . 10/06/2008 04:37 PM ..THEGAT~1 The Gate dvdrip(ironeddie)Divx[1].avi. 09/22/2007 02:13 PM 0 10/05/2008 04:23 PM 21,661,580 Thumbs.db 2 File(s) 21,661,580 bytes 2 Dir(s) 17,562,890,240 bytes free I tried every reccomended method of deleting the file as described here to no avail. I Know I’m getting the file name right because I’m using the tab key. Guess my question is what exactly should I type in from C:\ (at the very beginning) to delete this file with the dot at the end? When I try to preface a path with \\?\ it says the following. 
CMD does not support UNC paths as current directories. =( Exactly what’s been tried thus far… C:\>del \\?\C:\Documents and Settings\Desktop\Maps\The Gate d vdrip(ironeddie)Divx[1].avi. The system cannot find the file specified. C:\>del \\?\C:\Documents and Settings\Desktop\Maps\”The Gate dvdrip(ironeddie)Divx[1].avi.” The system cannot find the file specified. C:\>del “\\?\C:\Documents and Settings\Desktop\Maps\The Gate d vdrip(ironeddie)Divx[1].avi.” The system cannot find the file specified. I resolved it with Sorry for my english, i’m peruvian THANKS A LOT DUNCAN 1. Frances says: Thanks Chola! I could not figure out why del “\\?\C:\Documents and Settings\jsmith\Desktop\CAFUM93N.” was not working. I went through half this list trying every suggestion, then I hit your solution. The /a at the end finally did the trick. YAY! 2. Rahul says: Thank you Chola. The /a at the end worked. Thanks a LOT to Duncan. I have saved this page in my favorites now. 3. Good Grief!!! says: The /a at the end is what finally worked for the hidden file. After 2 1/2 hours of working on this and constantly getting the message: “The system cannnot find the file specified”… this worked! THANK YOU!!!!! 76. Mike Hays says: There is a program that will remove files ending with a “Dot” it is called “unlocker Assistant” this is a freeware program that did delete the Mac files with ease. The link for this file is: http://dailyfreeware.net/2006/09/15/unlocker-183/ Go to the file right click with the mouse and select unlocker it will ask what do you want to do with this file, then pick delete. 77. Stella Volante says: I have a similar problem with deleting invalid files. But my problem isnt solved by the above method. (I can successfully create a *. file by \\?\ and it can be deleted by \\?\ after. But it just doesnt work for my other invalid files.) 
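Several of the follow-ups above report that appending /a was the missing piece for hidden files. Frances's working command, reassembled as a sketch (her path and filename, as quoted in the thread):

```bat
rem The \\?\ prefix preserves the trailing dot; the trailing /a switch
rem tells del to ignore file attributes (hidden/system), which is what
rem made this work where the prefix alone did not.
del "\\?\C:\Documents and Settings\jsmith\Desktop\CAFUM93N." /a
```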
So for people like me, that cannot delete files by the above method, this is what I used to solve it: http://www.soft411.com/company/Assistance-and-Resources-for-Computing-Inc/DelinvFile.htm Easy to use. 78. Oh my GOD I can’t believe it. I’ve been searching Microsoft’s tech sites, Norton’s sites (my files were NPROTECT files), running CHKDSK over and over, making the files unhidden and trying to delete them from the command prompt and DOS and XP, nothing worked. You have just released 12.7 GB of my disk. I love you. 79. OngKL says: Thank you very much! x 1000 times I managed to use the command you recommended copy \\?\D:\Disk1 to copy a file in an old CD I used a few years ago when my OS is Win95. Under my present OS the file shows in the directory with 0 byte. After the copy, finally it shows some data. Thank you again! 🙂 Ong KL from Singapore 80. Alex says: THANK YOU! THANK YOU! THANK YOU! THANK YOU! This worked wonderfully. 81. attaboy says: End the year on a positive note .. technique works on xp v5.1 sp3 build 2600 (Pro) thanks a zillion. unlocker (noted above)took out the stubborn ones with AS attributes .. these files where found in the nprotect folder. Mac files??? Grrrrrr .. never touch the stuff. Now, if i can just get symantec to stop puking liveupdate files into the nprotect folder every six minutes. Awesome solution .. KUDOS 82. Edom says: Awesome, thanks! Mine doesn’t even end in the dot, but it worked. Finally got rid of(C75_東方) (同人音楽) [dBu music] 廃弾奏結界 鬼譚奏鳴曲 Demon tale sonata (Lame3.96 83. shaun the genius says: if its on your desktop for example do this… in cmd type cd desktop (press enter) type dir (press enter) you should see the file there for example if its called imatwatofafile get the 1st 5 letters in this case it would be imatw type del imatw*.* (press enter) now should be gone 🙂 1. J says: did the trick! thanks 2. Rahul says: Hi All, I have a torrent file ending with space. I tried all methods it does not go. 
When I do a dir at the cmd prompt it does not list the file. Thank you!
3. mm says: this worked perfectly! thank you!! (the other suggestions on this blog and correspondence weren't working for me)
84. myfinalheaven333 says: You just earned 1,000,000,000+ internets from me. I had a stupid torrent file on my desktop forever that I was having this problem with; this did the trick for it 😀
85. Marty says: Many thanks for posting; this worked for me for a file that had a space at the end of the file name. A brilliant solution that doesn't require installing a freeware app.
del "\\?\C:\Documents and Settings\My Name\Desktop\filename.torrent "
86. I had that problem, and this works! The file in question ended in a space, with no extension. It was created by DownloadThemAll during a cancelled download. So I guess the OS allowed its creation, but couldn't delete it. I changed it to adsf.txt and deleted it, no problems. Great!
It helped me to delete a blablabla.torrent file. Windows was giving me the same error. Since I had the file in the c:\temp directory, I just went to the command prompt and did a del *.* Problem solved! Thanks.
88. IamAnerD!!! HA!!! says: THANKS SO MUCH BRO!!!
89. Eric Moran says: This works for deleting files very well, but I cannot get this same method to work while trying to remove directories.
90. Alex says: Actually, I was looking for a way to create folders that cannot be deleted normally by Windows and you helped me out! Just passed md "\\?\c:\AnnoyingFolder." to the command prompt and there! It works like magic. One other thing… Whenever you create a folder ending with a dot, you can neither access nor delete it. Should you however create a folder whose name ends with a space (" "), you can access it like a regular folder but you cannot delete it. In such folders, any subfolder or file you create is *lost* for good. Windows does not recognise its existence.
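Alex's create-and-delete experiment above can be written out as a round trip; a sketch using a throwaway folder name, safe to try in a scratch directory:

```bat
rem Create a folder with a trailing dot; Explorer can neither open
rem nor delete it, because normal path parsing strips the dot...
md "\\?\C:\AnnoyingFolder."

rem ...but the same \\?\ prefix removes it again:
rd "\\?\C:\AnnoyingFolder."

rem The same round trip works for a trailing space:
md "\\?\C:\AnnoyingFolder "
rd "\\?\C:\AnnoyingFolder "
```

Heed Alex's warning, though: anything created inside a trailing-space folder can become unreachable, so keep this to throwaway names.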
Plus, as soon as you create any file or folder in a folder whose name ends with a space, the DOS command, RD, or RMDIR does not work on it nor its contents any more. You’ll have to use the program Unlocker. My warning: Do not mess around with folders whose names end with a space or you will be in serious trouble. Thanks man. You’ve been of help. 91. DILLI says: Thank you Thank you Thank you!!! The unlocker program worked perfect. Thanx once again to all of you and your input 92. Bill says: Thank you so much for sharing this solution, Duncan. A worthy contribution to the public good. 93. SB says: I have a Firefox bookmark folder named “Car..”, which when imported into IE8 becomes “Car.”! Stupid Microsoft! The imported bookmark folder is neither accessible or deleteable from within IE or Windows Explorer. Thankfully, your solution works for me! 94. Matt says: OMG THANK YOU SOOOOOOOO MUCH!!!!! 95. Dankwa says: absolutely fantastic. i tried everything to no avail until i found this solution. Thank you very, very much!!! 96. Mark says: Thank you Duncan… Two week problem fixed. 97. ogeday says: unlocker 1.8.7! i’ve been trying to delete a file for two weeks but now it’s gone. thank you unlocker =) 98. Haluk Cengiz says: Thank you Duncan, The same problem happened to me yesterday. I used, Chkdsk, Moveonboot and unlocker but they did not help me. I never thought that this was that easy. Now I tried and it worked. Thanks million times. It is really annoying to have it on your desktop. 99. A1efBet says: THANK YOU! The unlocker worked flawlessly and got rid of a folder that ended with a space. It was driving me crazy for over a month! 100. crinky says: TYVM (grrrr Mac people!) I used rd /s \\?\path and it worked perfectly. Gotta love the Series of Tubes!!! 101. 
Lynne says: Not working for me on my Windows 2000 Server. When I try D:\data\Scanner\scan>del "ADVISORY SERVICES" I get an error. Tried the unlocker; it gets the message "Process not locked, object cannot be deleted". Grrr… These folders are created when someone scans a doc, saving it to the network. It seems that occasionally certain files just won't delete. Any suggestions? tks
102. BryTech says: SOLVED: UNREMOVABLE DIRECTORY! Ok, so I have a Windows XP machine with a shared folder used by some Mac users. That got me in trouble. Now I have a load of folders I can't delete, even using the above suggestions. COBIAN BACKUP to the rescue! I use the Cobian Backup utility (free) for other folders, and I noticed it successfully removed folders I couldn't. AND, on the Cobian "Tools" menu, there's a utility called "Deleter". You can browse to any folder name on the XP PC, select it, check the box that says "Delete recursively" and away it goes. Those guys RULE! Yeah, maybe installing a backup utility is a little more than you want to do, but it only takes a few minutes and it was so worth the time. FYI, I'm running v9 of Cobian Backup. Good luck to you all, but this worked great for deleting a bunch of Mac-deposited folders!!!
103. shopkin527 says: the command-line resolution suggested here did not work for me, but the "unlocker" utility did provide a quick and easy install and fix. thanks!
104. Lynne says: Thanks BryTech! Your solution (after trying many) was the only way I could get these folders to delete!
105. BryTech says: No problem. I couldn't get the folders removed any other way. Even Unlocker wouldn't do it for me.
106. Tim says: Awesome solution, thanks!!
107. Jayanth says: Perfect solution. Thank you.
108. andrew says: Thank you so much.
109. meirav says: Try double-clicking on the file and opening it with Notepad. Then you will see a message: "Do you want to create a new file?" Click YES. Then write a few words in it and click SAVE.
Close Notepad and go to the file again , right click on it and delete. Worked for me. Hope it will work for you… 110. meirav says: Actually, it did not work for me at the end so don’t try it. But i downloaded the UNLOCKER and it deleted the files finally. 111. msknyc says: Thanks a lot….was finally able to get rid of a troublesome file that’s been makin’ me nuts for the longest=) 112. Eric says: worked great for me with a file name that ended with a dot. command prompt lists it’s directory like so c:\documents and settings\eric> you must remember to use the command cd\ first eg. c:\documents and settings\eric>cd\ press enter and it will now show C:\ now type the file you want to delete and it will look like this: C:\ del \\?\c:\documents and settings\eric\desktop\annoyingfile. 113. Eric says: example spacing correction Only required space is between del and the \\?\ any other spaces are the actual spaces in the directory and file names. C:\>del \\?\c:\documents and settings\eric\desktop\annoyingfile. 114. ian says: at command line it kept saying it couldn’t find the file, but when I typed (from inside the folder) del awaken~1 it worked great. Use the DOS filename. 115. scorpio_india69 says: Thanks a lot !!! I had two such cases on my machine. One folder and one file. Both were deleted with no problems. Thanks again, and keep the good work going ! 116. Todd says: You are the best! Thanks! 117. TheChaim42 says: Thank you for the command line code! It’s been bothering me for a few weeks and it’s been a VERY long time since I’ve had to use a DOS command line! 118. Lauren says: Thanks so much, the little twerps have been annoying me for so long. 119. Scott says: Thanks for the elegant solution and to those intrepid commenter’s who where able to fill in the chinks! The last time I had a file like this I reformatted to remove it, this is a darn sight easier! 120. Hari says: Thanks for this kool stuff buddy 121. Nice says: YAAAAAAAAAAAAHOOOOOOOOOOOOOO!!!!!!!!! 
THANK YOU SOOOO MUUUUUUUCH!! YOU ARE A GENIOUS!!!!!!!! YEEESS!!!!!!!! 122. Vekos says: This works wonderfully. For those havin’ trouble typing in the path, click and drag the file name into cmd eg: del “\\?\click and drag the name of your file here and the path will be filled in for you. Use your back arrow on your keyboard and delete the double quotation mark ” that appears before the C: or D: press Enter and yer done. Click and drag the name of the folder into cmd if using rd /s “\\?\ remember to delete the quotation mark, press enter, type y for yes then Enter and voila. 1. Josamae says: oMG.. the best ever… i love it.. It worked for me… Almost 5 months since I”ve tried to delete this annoying file… Now, I’ve found the solution..Thanks a lot.. More power to u… 123. bryan says: Thanks, I waded thru a lot of info and tried several things before I found your solution 124. ota says: THANK YOU SO MUCH!!! At last, after several months, I could delete an annoying file thanks to your precious advice. I am really happy and very greatful. Ciao from Rome 125. Alex says: Thanks a lot, man!!! The del command didn’t work as I’m on WIN XP SP 3 NTFS, but unlocker still helped!!! Many thanks !!! Also for the trick;-) 126. Jason says: Hi there, I had a problem deleting a folder created by a program — I tried the above but this did not work for me (I have ended up with the folder on a number of drives through syncing) the only way I seem to have been able to get rid of the folder is by booting into Ubuntu by CD and using Linux to get rid of it! 127. Pearl says: Hey thanks for the solution! Been looking all over the web for help and you’re the best! 128. Graham says: I just tripped over a super simple (that’s me!) solution. Open the fikle with Unlocker 1.8.7, change the pulldown to delete, and bingo….gone. 129. Mark G says: Thanks. In my case there was very very very very long file name. 130. Myk says: 100% working. Thank you so much.. 🙂 131. Richard G says: Bingo! 
(problem solved) Much appreciated. Smart stuff! Thank you.
132. LoTek says: Duncan. Well played! Solved an issue with a craptasmo file that threatened my very existence. It has been dealt with and life will resume. Tragedy avoided. I, like many others, feel the wrath of The MAN often and appreciate simple solutions. Now time to catch the 405.
133. Boatless says: Duncan, these three files have been on my desktop for years. I was spring cleaning this evening and was bugged one more time by the files' mere existence. I stumbled on your solution and it's genius. I too was told by my IT department there was no way to remove the files, and even if they rebuilt my hard drive there was no guarantee they would go away. Hats off to you a million times. I am grateful.
134. Duncan, thanks! I've had two files stuck in a folder for months that I hadn't been able to get rid of.
135. Anurag Anil says: I am not able to type the recommended command in the "command prompt" as it shows: I've tried to use all "delete" programs, but they didn't work. The file is on the desktop. The annoying filename is: Help needed… DESPERATELY, as I love a clean desktop. If you want, you can mail me. But please ASSIST me.
136. john says: Ty for the info… At last I deleted that pesky file
137. Anonymous says: THANK YOU SOOO MUCH!!!!!!!!!
139. Mac says: My file was on my local disk, not a system file and not hidden. It did appear to have a trailing space in the name. None of the above worked directly, but a variation/combination of posts led me to try this, which worked:
1. cd to the directory with the bad file
2. dir /x /a to get the file's 8.3 short name, say "shortname"
3. del /a shortname
No other options on the del command. For example, "del /ahs shortname" did NOT work, since the file was not a system or hidden file. I don't know what the "/a" option is supposed to do without additional specifiers, but it worked for this case. @Anurag: in step 1, to cd to your desktop, just type "cd Desktop" from your "C:\Documents and Settings\username>" prompt.
1. Rahul says: Hi Mac, thank you. This works. Each file is different. Love this page. Thank you!!
140. Anurag Anil says: Thanks, Mac, for showing concern for my woes. Though, due to my ineptness, it didn't work. But something DID work, like magic, after going through the agony of watching that irritating file's existence on my desktop: http://ccollomb.free.fr/unlocker/ Amazing…
141. rajkuma says: ooooooooohhhhh my goddddd… it's working……. thanks man… may god bless you.
142. ggeorgi says: Well done man….. I had this "ghost" and had been looking for a solution for a month….. thanks
143. pae says: thx a lot…. >,< this solution is great
144. Anurag Anil says: But something DID work, like magic: http://ccollomb.free.fr/unlocker/ Amazing…
145. diego says: YOU ARE GOD! My God, I was having such a terrible time; look, I'm a maniac, ok? I couldn't live with this damn file that refused to go away, and you had the seemingly non-existent answer. Thank you!
146. J says: Thanks a lot! Worked for me.
147. rcragun says: worked for me!!
148. Brenda says: gee, thanks a lot! 🙂
149. derek says: Thanks Chola, that worked.
150. Shawn says: Again, another big thank you
151. Basu says: Thank you very much man
152. Dave says: Thank you so much; my server just got hacked with 10 GB of s**t and this saved my life, many thanks man.
153. Christopher says: I was pulling my hair out having tried just about all the standard stuff to remove an annoying directory. This worked like a charm. Thanks!!
154. Duncan you just rock man!
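Mac's short-name route above avoids ever typing the problem name. A sketch (the short name shown is hypothetical; read the real one off your own dir /x /a output):

```bat
rem 1. Go to the directory holding the bad file (placeholder path):
cd "C:\Documents and Settings\username\Desktop"

rem 2. /x shows each entry's 8.3 short name; /a includes hidden/system entries:
dir /x /a

rem 3. Delete via the short name, so the trailing dot or space never has
rem    to be typed; the bare /a removes the attribute restriction:
del /a SHORTN~1
```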
I spent the last 30 mins of my life searching for this issue in google and I’ve found only really stupid suggestions and explanations of this little problem. Until I got to you! Thank you so much 🙂 155. ZoRo says: Big thanks 156. Ben says: You are a legend, thanks alot for taking the time and effort to post this. 157. luiz says: thank you. incredible ! 158. da bare says: you da man!!! and the others that helped you come up with this!!! at work they always call on me to fix computer things and now after searching and finding this for my personal computer, i think i’m gonna have some fun at work with creating files and then deleting them! hehe 😉 159. Vašek says: Thank you! I suffered for one year with one file/notfile on my Destop (no possibility for change name, delete or move through normal way) 160. nick says: didnt work so depressed 161. RKD says: Impressive..! Finally I got rid of those stupid files on my desktop.. Thanks VERY much Duncan.. 162. bibie says: wow… thanks a lot duncan..ftr a month, finally i found it. thnks! 163. Hels bells says: Fan-bloomin’-tastic!! Thanks a million, worked a dream… 164. Dave says: Thanks alot for taking the time and effort to post this, tearing my hair out trying to get those damn directories to delete. 165. woooooo says: this just saved my life. thaks 166. Randy says: Thanks!!! worked great 167. Stephaen says: Great! It works also for files with some weird invisible characters (these files came from my Mac at work). 168. Steve says: Thanks for that, I had some files with the syntax [file name] at the start of the filename that apparently, windows didn’t recognize. However your solution beautfully deletes them (and creates them too !!) One thing I would add, is that instead of using two “” at the end of each command line, just leave the command as it is untouched after you paste the offending file name. this worked well for me with a multitude of different names. 169. RP says: Thanks a MILLION. ‘Unlocker Assistant’ worked. 
File deleted successfully 1. MWR says: Thank you. 170. Dietmar says: Just had a similar issue with a folder containing a space as last character (“FolderName “). Your tip helped also in this case. Thanks 171. Kushagra Udai says: Hi, I’m probably the most unfortunate of you all – I’ve got a folder whose name ends with a backslash. I’ve no idea how it got created – I probably copied it off a pendrive, although I can’t imagine which OS would allow this. 05/17/2009 03:39 PM . 05/17/2009 03:39 PM .. 08/18/2007 04:32 PM defective\ 0 File(s) 0 bytes 3 Dir(s) 11,153,424,384 bytes free Thats the folder – please not hat it was called defective because I had kept some incomplete videos in it – NOT because this was intended. I’ve tried Unlocker, Delinvfile and all the methods in the comments above – absolutely nothing seems to be able to find the file. And Delinv fileshows its short name also as “defective\”. I’ve no idea how that can possibly be because that is 10 Characters. For some reason /x/a does not seem to show short file names when I use the switches with the dir command in cmd. Any Assistance will greatly appreciated. 172. mARk says: Just had a file that ended with a space this worked on the First Try….Thanks 173. Flushing Turd says: Found the solution to the file copy problem. It was a firewall setting of AES-128 encryption that was mucking up the copy. Changed it to a lower encryption. 174. aileverte says: Thank you! The file that got stuck on my desktop was actually a partially downloaded pdf file, and it didn’t have any extension at all… Tried three different things in the command prompt, and it worked when I added a space to the end of the file name! 175. Johnny T says: Thanks! I can get around a puter, but not such a hot shot but this worked easy, way to go bro’! 176. Scott says: Thanks! Got rid of a torrent file I couldn’t delete. 177. piyush says: hi! i hv a whole folder to delete . i tried using ur suggestion but itsnot happening …. 
😦 the file names are strange, like "*", "STX", under C:\MetaStock Data\april\ … please help
178. Nobody says: UNBELIEVABLE!!! IT WORKS, LIKE A CHARM (TO QUOTE SOME OTHER USERS). THANK YOU SO MUCH FOR YOUR TIP, DUNCAN!
179. Beef says: Hope this can help: I had 4 files ending with a space. They were flagged system and hidden. I couldn't delete the entire directory because it was my desktop folder! The \\?\ trick doesn't work, because of the H and S flags. The /a trick works instead! It works because you don't need to specify any path (and ANY filename, right or wrong, with an ending dot or ending spaces; it doesn't matter). The wildcards and the /a will do the rest.
del *.* /a (only if it is a simple file)
del *.* /ash (in my case)
180. Michael says: So happy… it worked for me. Thanks a lot for sharing it.
Use a third-party program called "Unlocker" to delete the file permanently; here is the link: http://ccollomb.free.fr/unlocker/ That unbelievably worked for me after trying all other useless ways. Hope it does the same for you.
182. Neal says: You smart boy Duncan… Thanks very much. First I typed in the command prompt del "c:\Documents and Settings\cinkhede\Desktop\Annoying file you can't delete." by using the TAB key, and then added \\?\ before the filename. Worked fine for me.
183. Mika says: Worked beautifully. This is a problem I have been trying to solve on my Windows servers for months. Thanks a bunch!
184. David says: The file I am trying to delete from my desktop is "CASJQTMT." Duncan's solution didn't work for me. Unlocker did not work either. Any suggestions?
1. Neal says: Navigate to the desktop through the command prompt and try the following: esktop\CASJQTMT."
185. Thank you! says: wow kickass, this worked!
186. Karthik.India says: Thank you. This works perfectly.
187. Rahamath says: Thanks Dude. U r great
188. Alex says: Thank you very much Duncan! I also inherited a bunch of these files and directories and they were a major eyesore. What a great solution. YOU ROCK!!!
189.
Jacob says: Thanks for the heads-up. I had a few of these files that came from…….a MAC. But, not anymore. Thanks again! 190. Alec says: Hi guys! Normally, I am one of those who only read about such problems without contributing. However, this time, I want to share my experience of how to remove these files and deal with this “cannot delete file: cannot read from the source file or disk” error: I simply installed Unlocker, right-clicked on the files and deleted them right away! So, the simplest way is this Unlocker, if one wants to get rid of these files stuck on our desktops! (btw, they were .torrent files) 191. Matt says: Dude I love you, this file randomly showed up from a torrent and I was worried it was a virus or malware sorta file that wouldn’t delete itself. Glad to see it was not, thanks for the help. 192. Tuc says: Worked like a charm, thanks! 193. Jonez says: PRECIATE THAT HELP WITH USING THE UNLOCKER PROGRAM, WORKS LIKE A FUKIN CHARM YA DIG!! 194. loc says: Excellent, it worked! I was getting so pissed off with these Mac folders on my WinXP machine! Thanks!! 195. qwerty says: RMDIR! THANKyou thankyouthankyou! 196. Bgr says: Oh great! Been hunting for this! Thanks, works flawlessly on XP64! 197. bala says: THANKS A TON. I broke my head for several days on this. 198. cecileva says: Thanks for this, worked like a charm for me too. It was also a “buddy file” for a word doc that I received from a correspondent on a mac. 199. Thanks – such a folder was stuck on one of our servers; this really fixed the problem. Since the volume was on a RAID 5 I doubted it was a bad sector. Thanks again! 200. Racan Aboredaif says: i have tried everything but nothing seems to work. it doesn’t seem to recognise the file at all. the size is zero. i keep getting msgs like this when in dos: 1. the file name, directory name or volume label syntax is incorrect 2. couldn’t find …. etc 201. Keksich says: My folder I want to delete doesn’t have an 8.3 short name.
Other folders do, but this one has just empty space. 202. Keksich says: Unlocker doesn’t help either. 203. Keksich says: (Yes, I run SP3) 204. Joel says: Thank you so much for the help! I’ve been struggling with a folder for a while. No longer. 205. Keksich says: Solved it! Opened Nero BackItUp ImageTool and deleted it from there ^^ 206. just a note: You have done an excellent job sorting this problem. Another way is: go to Start, hit Run, and type in chkdsk /f – chkdsk acts like the old scandisk, instead of Windows’ new checkdisk. NB TO ALL: THERE IS A SPACE BETWEEN CHKDSK AND STROKE F, i.e. CHKDSK[SPACE]/f. A lot easier, and it works – I run it regularly; saves defrags. 207. Travis says: Thank you very much for this fix! I was having problems with a folder in Program files. It was called “Mafia” but there was another folder also called “Mafia”. I looked closely once and it turned out to be “Mafia ” (with a space at the end). That seems to cause problems with windows, as I can delete one of those files, but if I try to delete the other (or access it) it says the location on the disk cannot be found. But your ‘rd /s “\\?\C:\” fixed it. Thank you again 208. jay says: I downloaded a torrent file the other day….Firefox saved the torrent on the desktop. Long story short, I could not delete the torrent files from the desktop. I would get an error message that read, “Cannot read from source disk”. Frustrated, with re-install disks in hand, i decided to research the problem and I stumbled across this post. THANK YOU SO MUCH!!! I went and got the “unlocker” program. NO MORE FILES!! very good software, easy to use, and a handy tool to have if you do a lot of downloading. THANK YOU!! 209. Dear Duncan, I just want to say thanks a million for helping me to solve the problem of a “0 bytes” file that wouldn’t delete. I have been trying for 3 days to find a solution on the internet.
I’m dizzy with all the sites that I’ve visited and reading all the similar problems and solutions (that didn’t work in my case). I had accidentally created the problem by naming the folder “ONCE UPON A TIME..” (notice the 2 dots at the end of the title). It seemed a good idea at the time, you know, instead of writing the whole title….boy was I wrong. I’m not so computer savvy so I went looking on the net for a solution. I tried FileAssassin, File&Folder Unlocker, MoveonBoot etc, etc, etc. I was getting obsessive about it (I don’t like to give up!!!). Most of the suggested solutions worked if your computer recognises the existence of the cursed folder, but in my case it said the location (folder) didn’t exist, or that it had “no handle”. I tried ChkDsk but it came up clean and the folder wouldn’t budge. I tried Safe Mode, I tried overwriting with another folder (unnamed and with the same name)…no joy. It wouldn’t let me move it or change its name. In properties it confirmed “0 bytes”, and I also confirmed it wasn’t being used by any other program. I scanned with anti-virus, Spybot Search & Destroy, and Threat Fire, just to make absolutely sure I hadn’t somehow picked up a rootkit or virus/spyware. All clean. Some of the suggested actions I found on the internet filled my trembling heart with more fear of disaster than is healthy and I didn’t dare try them (“Avenger” for example). Some of them seemed too damned complicated to follow. Then, just 20 minutes ago, I found your explanation, clearly and simply written…and better than your explanation, your solution, also clearly written. I followed your instructions carefully and…..(drum roll)…it’s gone!!!!! So unless it comes back on my next reboot, I’m so happy I could have your children!! (not literally). Just want you to know that I’m really, really grateful and thank you once again. Best Wishes. 210. little_big_man says: Thank you very much! 211. Steve Tomlinson says: Excellent guide.
I had a few annoying files accumulating. When you can’t get a file to behave the way you want it to, it can briefly take over your life. LOL. Thanks for the great work 212. Frank B says: Thank you much!! That worked for a file i had which had a blank space at the end! 213. srini says: open command prompt. then select the drive in which the file you want to delete is placed. use the dir command. use the tab key to select the file u want to delete. then add the word del in front of the file name n press enter. give yes. your file will be deleted 214. srini says: open command prompt. then select the drive in which the file you want to delete is placed. use the dir command. use the tab key to select the file u want to delete. then add the word del in front of the file name n press enter. give yes. your file will be deleted 215. srini says: open command prompt. then select the drive in which the file you want to delete is placed. use the dir command. use the tab key to select the file u want to delete. then add the word del in front of the file name n press enter. give yes. your file will be deleted 216. noel says: thx for the post – google helped me find you and I had a folder with a space on the end of the filename and the command prompt thing helped. 217. Clay says: PAY ATTENTION TO BACKSLASHES AND QUOTE MARKS. THESE ARE VITAL TO MAKING THIS WORK. BUT IT DOES WORK. I HAD TO TRY FIVE OR TEN TIMES TO GET ALL OF THE BACKSLASHES AND QUOTE MARKS IN THE PROPER PLACE. BUT THEN AGAIN, I AM SLOW. THANKS FOR THE SOLUTION. THIS WAS DRIVING ME CRAZY! 218. greciu says: thanks a lot man… 219. Jake says: I’m getting the same error message, but my file doesn’t end in a “.” I’m copying a file called “trance around the world.mp3” and pasting it into my phone’s memory. I know it definitely has enough space, but for some reason it’s refusing to do it 😦 220. kkw says: Unlocker! FTW! 221. Carol says: My only additional suggestion is that you go to Properties to make sure you have the exact file name.
The file name I was trying to delete was “Fwd_ FW_ Transplant…..” On the first ten tries, I did not realize there were *spaces* in the file name! If you right click on the file and select “Properties”, you can copy and paste the exact file name from the top box into Word, write your command there, and then copy and paste it into DOS. Worked like a charm once I had the correct file name! 222. Ryan G says: HOLY SMOKIN CRAPPITY CRAP CRAP…lol thank you sooo much..i recently re-installed my o\s just to be rid of a pesky file that didnt finish downloading and remained as an undeletable file in my dl folder…finally figured it out with lots of help from here and a couple links posted here…i can now officially help others with step by step instructions to completely remove those files…one way or another i will help anyone who asks for it , long as i get the emails from this site…thanks to all and ill continue the chain of helping!!! 223. Ryan G says: And no..i wont get u to reinstall windows to fix ur issue when i help haha…i had to delete a second one recently on my new system and got rid of it through the command prompt…i tried everything and finally got it to be gone so dont fear…HELP IS HERE!! 1. Carolin says: I tried the syntax above for the removal of folder to which I receive a response “Are you sure ? and reply Y, only to have a succession of errors that the system cannot find the file specified. This followed by “The filename, directory name, or volume label is incorrect” Is it Operator error? I assume since I receive the positive response, that the syntax is correct. These are all peculiar symbols for file names, that I can’t replicate. Unlocker has not worked. I’ll try MoveON next. 224. i had a music file that canceled during download, its left a part file on my desktop and wouldnt move, i used the command prompt function and it worked perfect, thanks (Y) 225. Ritesh Bisu Mishi says: Thanks a lot. 
At last i could delete this irritating file on my desktop and it saved me from reformatting my system over again. Nice suggestion 226. shashi says: thanks a lot.. totally awesome.. i had an ugly file for years and the only option seemed like formatting the disk.. thanks for making my day… 227. Ravi says: Thanks a ton for posting this article. I am indebted to you. I lost sleep for 1 day due to this “cannot read from the source file or disk” error. The surprising thing is, this dot-named folder was created by the Microsoft webcast website when I tried to download its presentation in live meeting format. You are smarter than the MS guys 🙂 228. spatial says: I had a Safari shortcut that had that “cannot read …”. I was trying to delete it via the method at the top of the screen but it wasn’t appearing in the cmd window. To solve this I changed the attributes of the file to “Hidden” then hit F5 to refresh the page and all of a sudden I could delete it. Weird, I’m blaming apple! 229. savan says: hey when i try to paste the bad file name in command prompt it gets pasted and i am getting a continuous beep from my cpu and when i press enter it returns to the normal prompt but the file won’t delete 1. Carolin says: I have had success finally, with Error-checking on disk properties. (Select Drive, right click > Properties > Tools, select both Check disk options > Start.) My bogus files were on an external drive, and this worked simply and beautifully, since I had been unable to reproduce file / folder names (hieroglyphics) from the CMD line. Folders and files are gone, and able to delete folder from drive. yippee 230. savan says: thank u finally i deleted them using the del *.* /a command i lost other files with them but at least i got rid of those files thank u 231. mentor says: Thank you very much. Finally simple and elegant way to clean the annoying files. 232. Dave says: Thanks! I had a folder with a space at the end… 233. watwat says: THANKS.
I had a folder with space at the end in my steam folder and it drove me nuts. 234. Sapa2ler says: THANKS. Finally, I can delete that annoying file on my office pc desktop…. 235. Pramod says: Err! in my case
D:\websites\Hub>rd imagegallery
Access is denied.
D:\websites\Hub>rd /s “\\?\d:\websites\hub\imagegallery”
\\?\d:\websites\hub\imagegallery, Are you sure (Y/N)? y
Access is denied.
236. joeboxertbf says: thanks guys, this method really worked and is very effective. I did it in command prompt to take out a self-made folder from Program Files 237. Nick Ng says: Thank you very much… My problem was finally solved by using the “Unlocker” program; since the Japanese characters cannot be pasted completely into the command prompt, the “The system cannot find the file specified.” problem continued to exist.. 238. PI1960 says: In my external hard drive “G” I found a lot of files with symbols and future dates reading 12/16/2036. All the files are empty and look like the following: 9hÜ{ë.m╝ I cannot delete them; I keep getting the error: Cannot delete file: cannot read from the source file or disk. There are a couple of hundred of these files in my external hard drive. How can I remove them? I tried the “Unlocker” program but it says they’re not locked, and when I delete them they’re still there. I would appreciate it if you can help me. 239. Eli says: Thanks a lot! It really works! Finally I succeeded in deleting these annoying files. 240. Wow, how silly of windows. Thank you so much for helping me fix this! When I put my code into the command prompt (Start > Run) and then click Ok, a browser spawns and tries to open http://del/ which of course can’t be found. I just can’t seem to execute a dos command from the command prompt… Am I being a dumbass, or is there something else?
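A side note for anyone scripting this cleanup rather than typing the commands by hand: the extended-length \\?\ prefix that the del and rd fixes above rely on is just a string transformation, so it can be built programmatically. A minimal Python sketch (the helper name extended_path is made up for illustration, not from the original post):

```python
def extended_path(path: str) -> str:
    r"""Return the Win32 extended-length form of an absolute Windows path.

    Paths prefixed with \\?\ bypass the normal Win32 name parsing that
    strips trailing dots and spaces, which is why commands such as
    del "\\?\C:\...\Annoying file." can remove names Explorer cannot.
    UNC paths (\\server\share\...) get the \\?\UNC\ prefix instead.
    """
    if path.startswith("\\\\?\\"):      # already extended-length
        return path
    if path.startswith("\\\\"):         # UNC path
        return "\\\\?\\UNC\\" + path[2:]
    return "\\\\?\\" + path

# The kind of path discussed throughout this thread:
print(extended_path(r"C:\Documents and Settings\User\Desktop\Annoying Folder."))
# prints \\?\C:\Documents and Settings\User\Desktop\Annoying Folder.
```

On Windows, a string in this form should be accepted by os.remove/os.rmdir just as it is by del and rd on the command line.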
Ok, I’m a schmo, but, I still put it into the actual command prompt window (Start -> Run -> CMD (enter)) — I paste it in there: And it says: istrator\Desktop\CA6FKTS5.” Could Not Find \\?\C:\Documents and Settings\Administrator\Desktop\CA6FKTS5. Any ideas? 243. sean says: Very neat trick 🙂 Thanks indeed for the information! 244. Manzoor says: Thankkkkk you very much…. It worked wonders for me…the command worked and I was able to delete a file which was showing hell…thanks for your expert advice and I would give you a hug if you were beside me…Thanks again…. Manzoor 245. Steven Gardiner says: That worked great–and the instructions were clear enough that even a command line phobic person such as myself could do it. Thanks! 246. rutvi says: thanks a lot ….. 🙂 247. Greg says: Thank you very much, worked like a charm! 248. Lloyd says: You nailed the problem!!! My file ending in “.” was within several nested directories. The “\\?\” approach didn’t work… and I tried many combinations. HOWEVER, just using rmdir /s worked perfectly (just naming the top directory)! THANKS AGAIN! 249. Sam says: Hey, so I looked over this post for quite a while and my file was not going away. So this is what I came up with: Every time I would type in the del “\\?\E:… it would say that the file was unable to be located (even when I would copy and paste the extension directory, so I knew I had it right). I even would tab through the entire directory so as to be extremely accurate. Nothing seemed to work. By the way, this file was on a flash/thumb drive and was formerly/caused by a corrupt .docx file (a.k.a. ms word 2007). So I ended up using the “rd /s “\\?\” command and deleting the entire folder it was in. Well, it worked. Here is what it looked like when I typed it into command prompt: rd /s “\\?\E:\—school—\—-Fall 2009—-\Marketing (sales) 3200\Homework pressed enter, and…. good by folder! the corrupt file was reading 1.2gb on a 4gb flash drive. How STUPID! 
But now it is gone and am glad I ran into this post. Also glad to find there are others with similar problems and im not just an idiot. Hope this helps 250. Mike says: I’ve never left a comment on a blog before. I generally try to stay away from them because of the risky advice and, let’s just say, poor attitudes often encountered. However, I’ve been fighting a plethora of computer issues for over a week now. The latest was a corrupted Recycle Bin where I found I couldn’t delete 321Mb of Symantec files. I’m so grateful for stumbling across this link that I’m breaking my rule (“More of a guideline than a rule, actually.”) and saying, “Thank you.” I don’t know how many times I, too, have struggled with this issue. Also, allow me to say that, although I didn’t read all the posts, I found the inputs, attitudes, and egos to be wonderfully different from those other blogs to which I referred. Thanks, again. 251. Tim says: Duncan, you are a god among men. Thank you for this post, it fixed a suuuuper annoying problem I had. 252. Stella says: Thank you so much. I’ve had this annoying file on my desktop for MONTHS! Now it’s gone. Hurrah! 253. Paulo says: Thanks, it worked. It should be noted tough, that even in DOS prompt, I got the message “the system can’t find the specified folder… blah, blah, blah…”. When I looked again where it was, it had simply vanished. Let me tell you, every passing day I think more and more that computing is modern day magic. 254. Rui says: When finish install one new option in your windows right button of the mouse will be showed. click whit right button of the mouse in the file that you need and select the action. This program is perfect and very easy to use. tanks Rui from Brazil. sorry by the my worst English 255. Afshin says: thanks so much! I also had a few files which I had downloaded and they had a space at the end of extension, so like they were 4-char extensions, and I could not get rid of them. 
removing them from the DOS prompt worked at the very first time! I was lazy so I used the “del” command using the confirmation mode (/p) and confirmed “y” when it came to those files. Thanks again! I am happy tonight. 256. Hey thanks alot … This thing was bugging me for the last 3 days, atlast! Thanks alot.. 257. ETM says: What if I have over 600 .DS_Store files on a server, in many different folders that I want to delete. Is there an easy way or will I have to go to each folder individually to delete them? 258. I had some folders that ended in spaces and question marks. The “rd” command line deleted them with no trouble! And the tab-until-your-file name-appears trick sure was helpful when I couldn’t tell exactly what trailing character I was dealing with. Thanks! 259. JMPD says: THANK YOU SO MUCH! 260. Mahdi says: THank you, you helped me so much. 261. Darlene says: This worked like a charm! Thank you SO much! 262. jryprt says: rd /s “\\?\C:\Documents and Settings\User\Desktop\Annoying Folder.” Worked after I changed User to my user name. Thanks been looking for a fix for over a year. Just wanted to say Thanks I downloaded unlocker and it worked Thanks for the info.. All The best 264. Ibrahim Shah Khan says: Go to this site: http://ccollomb.free.fr/unlocker/ At last i got rid of this Useless file….Unlocker helped me it is the best!!! 265. Zelta says: Great thanks a lot!! 266. Lars says: It helped me a lot. Thank you! 267. This application, unlocker, really helped. Thanks a bunch! 268. Dennis H says: Try a free program called “Unlocker.” It is not spyware or filled with any virus junk. Great for solving many issues when Windows will not let you control a file. Allows you to release Windows control with useful features and ease. http://ccollomb.free.fr/unlocker/ 269. pascolodoro says: hi 🙂 cheers 🙂 270. JK says: OMG!!! It worked!! Thank you!!! 271. Sharon says: It worked – thanks so much for relieving that headache! 272. tbone says: Thank you kindly. 
This was a rather annoying issue for me. 273. worked perfect, glad i found this post quickly. 274. glen says: What a hero! I couldn’t log off my pc nor log onto another because of this stupid file. I really would have a beer with duncan. Total legend 275. loran says: Thank YOU!!!!!!!!!! 276. Jackhammer says: Thanks for the \\?\ switch… I have been trying to delete a folder for months and would always get the cannot read from the source file or disk BS. This worked great man. 277. Linux says: It does not work; it says “The filename, directory name or volume label syntax is incorrect” 278. Pragya says: Thanks Dude!!! You are a genius…….. I’ve been trying for several weeks to delete 2 such folders on my laptop but all in vain. Then i searched on google and was directed to this blog. It was so simple that i was amazed….. All my headache was gone so easily that i jumped out of my bed. Previously i thought that it was a virus or something like that and kept on scanning with all kinds of virus removal tools and the trick was just so simple…. Hats off to you…..!!! Thanks a bunch….that’s the solution i’m looking for. ,,V,^_^,V,, 280. Peter says: Thank you so much! I appreciate it! this has been bugging me for a long time now. 281. Eleni says: Thanks a million!!! Had a file that was driving me crazy and now it’s gone!!! Respect! 282. Christos says: Very interesting insight into NTFS and the Win32 API. However the \\?\ did not work for me. The folders I couldn’t delete had originated after copying the Favorites folders to my USB drive. I have had this problem many times and have usually resolved it by copying all files off and reformatting the drive! The symptoms are: in Explorer or DOS, attempts to delete a folder tree \Favorites\e result in the error at the beginning of this post. Folder properties show no security or sharing tabs on either folder. However I resolved it by using DIR /X /S which showed my folder structure was actually “\favor~2\e0256~1”.
Using RD with these names allowed me to delete the files. Using \\?\… did not work in this case. Many thanks Duncan for focussing me on what the real cause of my problem could be! 283. Thanks man. I had 2 files I could not delete bcuz they ended with ‘file…..” I had made a backup of my website and a form processor creates a file of the output with no extension that I can read in notepad from FTP. But once I backuped up my site on my computer, and then tried to backup my computer it would stop at these files and would not go any farther. Windows XP would not let me delete or rename the file. Your post worked perfectly to get rid of those files. Again thanks 284. Question says: Damn. People seem to get this problem but most people are ignoring it here. And I’ve tried all the ones that have actually tried but to no avail. I have tried everything but nothing works. The file is on my USB and I’ve tried the unlocker and all the \\?\,/s, commands but it still doesn’t work. I just get the same error, Could Not Find blah or when I try to delete the folder: The system cannot find the file specified or filename, directory name, syntax is incorrect. I’ve tried disk checking and I also typed in /a, not sure how that works though. The problem doesn’t seem to be the filenames. At least I’m not sure. One of the files is called IrSPXO~1.PRX and I’ve tried naming another file the exact same thing and I was able to delete it. So what’s the problem here? 1. Wendy B says: I am eagerly awaiting a solution to your problem as well! I’ve done everything listed here, including that accursed Unlocker link–it did nothing but add MORE crap to my computer (thank GOD I was able to get rid of it), never mind coming up in a foreign language! I have this evil file on my MP3 player, and no matter what I try, it won’t go away! Someone HELP!! 1. 
Duncan Smart says: If the offending file is on a USB drive then just copy off the files you want to keep and then (right-click) format the USB drive, then copy the files back. 285. Lud Betoven says: Don’t know how to thank you man. I had couple of files on my Desktop ending with a dot and occupied my work space for months. Thanks a bunch! 286. iambrianrice says: Oh snap! I’ve been living with a file on my desktop for months. Rather than have a . at the end of the file name, there was a space that I had never noticed. Thanks for the direction and the relief. 287. Random Man says: I had this problem only I made the mistake if trying to rename so every time I tried to click off it was like OMG NO NAMING THIS FILE and since I had made no note of the name of the file first, I was stuck because every time I clicked off an error popped up. Fixed that by restarting explorer.exe. 288. Brenda says: I am having trouble with this fix: del “\\?\C:\Documents and Settings\Brenda\Desktop\Fwd_ Health tip of the day……” but I was unsure of how to open a cmd window and when I did finally open one it looked like this: C:\Documents and Settings\Brenda> Is this correct? Do I just paste the fix in like this? C:\Documents and Settings\Brenda>del “\\?\C:\Documents and Settings\Brenda\Desktop\Fwd_ Health tip of the day……” I get the following error when I paste that into the cmd window. The filename, directory name, or volume label syntax is incorrect. When I go to this website: http://ccollomb.free.fr/unlocker/ it sets off all kinds of warnings on my computer so I was afraid to proceed. These are the errors that I have received: This website has been reported as unsafe. This website has been reported to Microsoft for containing threats to your computer that might reveal personal or financial information. Can someone help? 289. wim says: Thanks! After downloading some louche file I ended up with 4 such undeletable files. 
They were just standing there, doing nothing, annoying me, and I couldn’t do a damn thing about it. 290. Brenda says: Well I am a little slow with things but once I used the tab it finally worked. Thank you, thank you, THANK YOU. I have XP and SP3. 291. D says: Hi there. I’m a dummy at all of this but I have to get rid of some icons on my desktop that give me that error message when i try to delete or rename them. Where do I type in the “\\?\” ? Like I type that before the filename right but how do I get to where I can do that? Can someone please walk through it like they’re trying to show their grandma? 292. D says: Oh my stars. I’m terribly excited! Someone up there posted a link to a program called Unlocker and it’s deleting those icons just like that. I must go thank him/her! I did manage to find out how to open the command prompt window from start > all programs > accessories but alas when i entered the command it told me “The System Cannot Find the Path Specified”. But at least Grandma is learning a thing or two. And those nasty icons are going just like that. Now my husband won’t see a thing! 293. Ned Edillor says: Wow! thank you so much! Finally! i got rid of that stupid annoying file! THANKS AGAIN! Salamat. ~Ned 294. Matt says: Thank you Duncan. \\?\ is the trick I was looking for. 295. That’s great Duncan… it is working for me using Win XP SP 3.. Thanks a lot.. LOL 296. Daron says: Great post. Deleted the dot file that was bothering me for a while now. 297. nightrider says: THANKS! Great tip. 298. stefan says: had a similar problem. //?/ didn’t work, still same error, so I tried del C:\….\temp\*.* . Problem gone 🙂 299. Jenny says: You rock! I was tasked to delete old home directories that data security could not, and these old favorites were kicking my butt. Not anymore!!! Thanks 300. Francis says: OMG! THAT WORKED SPLENDIDLY!!! THANK YOU SO MUCH!!!!! 301. Sammy says: Duncan Unbelievable OH MY GOD Thank you. Very much. Thanks, it worked. 302.
Vy says: Brilliant job. My OCD-like tendencies would not let me sleep until I found a way to delete these pesky files that I had just received via email and would not delete. At last I can go to bed. Thanks very much. 303. Mikuz says: Did not work! In explorer it says “Cannot read from the source file or disk”. In cmd, well, there are some weird characters, so I made a pic: Someone unplugged my usb stick while I was copying files 😡 Help? Anyone? 304. Thanks so much for this. My problem was mac files created on a terastation. Between this little hack and that program unlocker, that got rid of a problem that has been around for 3 years for me. ROCKIN! PS: My annoying files had extra blank spaces at the end of the file name. 305. tom says: hi, thanks for that tip. very useful. regards tom 306. Chris says: Thank you ever so much. Beer forthcoming separately. 1. Duncan Smart says: 🙂 307. chris says: wow i will admit that i was drinking quite a bit but for a minute i thought i was being hacked. i could not get rid of these two files that were .jpg that i got in an email. thank you duncan your command worked perfectly! oh joy! the un-deletable files are gone. 308. thankful1 says: man, you’re a genius, thank you very much for your help, keep up the good work, and again thank you 309. Ralle-USA says: This solution also works with spaces after file name extensions. Like “annoying_file.jpg “. I just couldn’t delete that file the normal way either. Always received the “Cannot read from the source file or disk” error message. Thanks man!!! 310. Mick says: Tried everything you suggested, but with no luck… and then I found the link to Unlocker and the darned thing is gone! Three days messing with an empty file that was driving me crazy… thanks for all the help in this post. 311. Jatin says: I have tried this: “ren \\?\e:\win xp tb.bit10.office2007.drivers. 1234.gho” – where 1234.gho is the name which i want to rename it to – but the result of the command is “The syntax of the Command is incorrect”. Actually the file i have is a 3.6 gb Norton Ghost file which ends with “.” – i don’t wanna delete this file, only rename it. 312. Jatin says: I have tried renaming the file by “ren \\?\e:\win xp tb.bit10.office2007.drivers. 1234.gho” – the result is “The syntax of the Command is incorrect”. The file i have is a 3.6 gb Norton Ghost file which ends with “.” – i don’t wanna delete this file but only want to rename it. 1. Duncan Smart says: You need to put quotes around the first filename because it has a space in it. 1. Duncan Smart says: e.g. ren “\\?\e:\win xp tb.bit10.office2007.drivers.” 1234.gho 2. Jatin says: Thank you man………you are a Genius…………….. thnkxxxxxxxx a lot!!!!!!!!!! 313. Zoran says: You are a f***ing genius!!! THANK YOU MAN!!! 314. Alex says: [let’s try again] Hi, I am experiencing this same error in MOSS, but when I try to rename a file through the explorer view of a document library. To let you know, the file has a ‘+’ character, was brought into MOSS through drag and drop in the explorer window, and is also part way through a workflow. Not knowing the origins of the document, can I assume that this is occurring due to the same reason? Thanks, Alex. 1. Duncan Smart says: If it’s in SharePoint then ultimately it’s stored in SQL Server so I doubt if the “\\?\” will work as this is an NTFS thing. I’d dig into the underlying SQL tables of your SharePoint site to see if you could fix it there. 315. David says: I finally did it. I had no idea what ‘Command Prompt’ was. Haha. Thank you SO much. You’re a genius. 🙂 316. Thing says: Like everyone before me, THANK YOU! 317. Santiago says: This is great. I had this page bookmarked and I knew I was gonna need it again. I had to look through the comments again to realize my annoying file had a SPACE at the end and not a dot.
But once I realized that, I was golden. So to all of you out there that may not be getting this done with the dot, try a space. Thanks again! 318. dude says: hey can some one e-mail me one of those un-deletable files, i tried to make one but i cant, and i can’t find one to download 😦 … I JUST WANT TO LOOK AT ONE. thanks. —————– dude123654789@yahoo.com ———————- 319. Katya says: bacm\\ folder will not delete on my desktop “owner\desktop” it was created outside of windows xp (fat fingered it) cannot delete it in safe mode can see the directory in cmd prompt under “all users\desktop” when i try to open the folder i get: ….all users\desktop\bacm\ refers to a location that is unavailable…. try to rd it per your instructions and get “… syntax is incorrect” 320. hac says: Thanks a million for the tip. You really saved my life. In my case it was not a dot but a space at the end of a folder, which I couldn’t rename or even delete, and it affected the functioning of a server-based application. 321. Jono says: Thank you so much Duncan! This little file of mine has been bugging me for months!! arr I looked everywhere for a solution and it still didn’t work, I tried the whole ending explorer process way and even downloaded some applications to do the job, but nothing Same as hac, it was the space at the end, therefore i was typing it wrongly into cmd and to add to my frustration it was a really long name arr you can imagine!! anyway thank you so much once more 322. Well Duncan, Not just you saved those hundreds of people above me from an annoying file but you saved mine + teaching me a new thing in the process. The file have been bugging me for months, I tried cmd to rm dir, I tried showing hidden file, I did all I knew till I suspected it as a virus, but thankfully, it turns out that it’s not a virus but file naming confusion? Whoever you are, wherever you are, thank you Duncan. Vielen Dank, tienzyee 323. 
AJ says: Okay, so it found the folder but it says “Are You Sure ?” and i’ve tried putting Yes, yes, Y, but NOTHING WORKS. afterwards, it just says “cannot find specified file” or sometihng like that. 324. Yogi says: Thanks pal !!! it really helped me to put those annoying files out of my desktop 325. Yogi says: and one more thing.. the problem arises not only with files and folders ending with a “.”(dot) it also comes with names ending with a ” “(space).. thanks again 326. This originally got posted over a year ago, and you’re still helping people on almost a daily basis… that’s pretty amazing! What’s also pretty amazing is that Windows doesn’t support the full range of NTFS, considering MS developed the file system in the first place. Well done Duncan, beer is on the way! 1. Duncan Smart says: Thanks Greg 🙂 327. Julie says: I have what looks like 243 annoying files on my external hard drive that appeared out of no where and wont let anti virus and malware scans through. Clearly something happened, and they have become corrupted. I read your tip with very high hopes and I will be VERY happy to buy you a beer if you have a further tip for me since when I run from the command line this is the response I get: \\?\F:\Movies\Pi\måƒ25Ñ╔Ω. º⌠ – The system cannot find the file specified. \\?\F:\Movies\Pi\~-1♫┘╡½è.╥♠z – The filename, directory name, or volume label syntax is incorrect. I’m on Windows XP with SP3 Julie 1. Duncan Smart says: Julie, try the short filename trick mentioned by Keith on September 20, 2008. Failing that, it looks like the drive filesystem is corrupt. I would copy all the stuff you want to keep off it to another drive and reformat the bad drive (right click it and choose Format…), and then copy the stuff back. HTH, Duncan 328. Jack says: Wonderful – have been looking for months to get rid of a directory (empty). I finally discovered it had a rogue character at the end (not a .) but which had the same effect. The ? trick finally did the trick. 
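A note on why the “\\?\” trick running through this thread works at all: normal Win32 path handling silently strips trailing dots and spaces from a name before looking it up, so the name actually stored on disk can never be matched; the “\\?\” prefix bypasses that normalization and hands the exact name to NTFS. A minimal sketch of the idea in Python (it only manipulates strings, so it runs anywhere; the helper names are my own):

```python
# Why "del C:\path\badfile." fails while the \\?\ form works:
# normal Win32 lookup normalizes the path first, and normalization
# strips trailing dots and spaces -- so the OS searches for the wrong name.
# The \\?\ prefix tells Windows to skip that normalization entirely.

def win32_normalize(name: str) -> str:
    """Mimic the trailing-dot/space stripping done during normal path lookup."""
    return name.rstrip(". ")

def extended_path(path: str) -> str:
    """Prefix a fully-qualified drive path with \\?\ to bypass normalization."""
    return path if path.startswith("\\\\?\\") else "\\\\?\\" + path

on_disk = "badfile."                       # the name NTFS actually stores
print(win32_normalize(on_disk))            # -> badfile  (not the stored name, hence
                                           #    "cannot find the file specified")
print(extended_path(r"C:\Temp\badfile."))  # -> \\?\C:\Temp\badfile.  (exact name reaches NTFS)
```

This is the whole reason `del badfile.` reports “Could Not Find” while `del "\\?\C:\...\badfile."` succeeds.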
By the way, I suspect the rogue character was inserted by a Nokia mobile phone! 329. Saeed says: Hi, Just stumbled across this and wanted to thank you for the great tip. Not only have I deleted two annoying locked files, but – due to the fact that these files came from a couple of large corrupt .rar archives – they were hogging around 20Gb of disk space, which I have now reclaimed!!! Thanks again! 330. patricia says: hello all . I tried the command but nothing worked ó’a.σí” Could Not Find \\?\F:\Music\album ▐▲╨ó’a.σí ‘?’ is not recognized as an internal or external command, operable program or batch file. \\?\F:\Music\album, Are you sure (Y/N)? y \\?\F:\Music\album\ù Yh3S⌂τ._ßπ – The system cannot find the file specified. \\?\F:\Music\album\ó2▌╒m0ä┬.ï≤¶ – The filename, directory name, or volume label syntax is incorrect. 1. Duncan Smart says: Patricia, try the short filename trick mentioned by Keith on September 20, 2008. 331. Nathan says: I have the same problem with the error message, however its on my usb flash drive…. 😐 332. Tamie says: Urgh! Not working. Tried to delete the files via Windows Explorer. Got the “cannot read…” message. Tried to rename files. Same message. Cannot move, cannot copy, cannot do anything. Tried the solution given by the author of this page. The files cannot be found. So I went and moved all files out of that folder EXCEPT those to be deleted, and then attempted del *.* in the command prompt. Tried to go up one level and just delete that folder; same problem. Attributes didn’t seem to make a difference, whether in WinExplorer or the Command Prompt. (By the way, there are 12 files I need to get rid of, at least seven of which have the faded icon that indicates hidden or system files; but even with hidden or system files showing, Command Prompt only showed 6 total.) Tried Unlocker. It said it deleted the files; they’re still in there. Tried the deleter from Cobian Backup utility…. 
I don’t remember the message I got, but those files are still there. Tried opening in Notepad and saving as a new, deletable file. Syntax not correct. What else am I missing? About the only solution I can come up with at this point is to backup everything I WANT to keep and reformat the disk (it’s a flash drive, so it’s not as big a deal as it would be if it were the computer, but still….) 1. Duncan Smart says: Tamie, try the short filename trick mentioned by Keith on September 20, 2008. 333. Carlos says: Thank you, Thank you, Thank you. Had a dozen jpg files that my Father downloaded on his desktop that I could not get rid of till I read this article. Took awhile but that’s cause I had not used a command prompt in years and forgot to put a space after del 334. sunil says: this is cool..it worked and i was able to delete files from my desktop 335. Ren says: nice ty paps 336. Brian says: o m g thankyousomuch Was able to delete 3 annoying files from my desktop. Hooray for the interweb and the intelligent people on it. 337. Debbie says: Thank you sooo much for posting this! I was about to try reinstalling Windows to get rid of those files on my desktop after having tried so many other things to delete them! 338. Jimmy says: This guy deserves some kinda award-Thank You! Seriously right now im imagining all the people that have used this to help solve their problem and the combined relief- how do you measure relief, in tons right!? The best part, typical of computers the solution is more simple than the problem and right under the nose. I mean the command prompt, who is gonna discover this problem of “undeletable” files and think right away, i could definitely fix this with cmd. It wouldnt have mattered much, i have basic understanding of dos (havent messed with in years) and even less of cmd but i wouldve never figured out the prefixing for this on my own. Again, Thank You! 339. 
Cossu says: i have a problem with a fil called êV4  from my MicroSD and i can’t seem to be able to write êV4  in my Command Prompt… any ideas? 340. Thanks a lot Duncan. Couldn’t do it for around a month… Great Stuff 🙂 341. Iggie mann says: Thank you for this tip! BEST TIP EVER!!! Been stuck with some legacy files forever and this was the only way to squash it! Thanks! 342. Hi, Everybody .. thanks for your posts… 343. Here is an easy way for deleting files that don’t have an extension or end with … or has a space after the period. Take your desktop and copy all the files except the one you want to delete to a new folder. Then go into the c:\ prompt, and go to your desktop C:\documents and settings\owner\desktop Then just find the file you want to delete. Make sure there are no other files in the folder and then type C:\del X.* X= the first letter of the file e.g. if the file is: Superman-flies… then type C:\del S.* Or of course if you are sure there are no other files then you can type del *.* Then go to the temp folder you created and move all your files back to your desktop, and then delete the temp folder. It worked for me! 344. John says: Great hint!!! 345. chris says: Thanks guys so much. I have had a damn file for over 2 yrs and unable to delete. The DOS method did not work, but unlocker did the job. Thanks again. 346. goran says: Thank you so much. Cheers 🙂 347. Larry says: “Error Deleting File or Folder – Cannot delete file: Cannot read from the source file or disk“. I get this error when trying to delete files from a flash drive that is on a GPS the files have names like (+si+_5-.C(+ I have try cleaning software but it can not access the flash drive it dose not have a drive letter but looks like a folder with the name My flash disk. I have tried to move the files, to rename them but no success. Thanks for any help I get…. 1. Duncan Smart says: Copy any data you want to keep off it then reformat it (assuming you don’t want the misnamed files)… 348. 
tahir says: good way to get rid of annoying folder. thanks. 349. Karen says: Thanks dude!!! Worked perfectly. 350. officejedi says: I’m trying to move files from one share to another ..and a bunch with the spaces and .’s in the file names will not copy or move … do not want to delete , as we need these files, but what would work to just move them to another file share? thanks 351. sms says: Thank you very much… 352. Anonymous62265 says: Thank you. Worked first try. Was driving me crazy til I found your fix. 353. Hey Author of this awesome blog ! This solution worked great ! Thanks alot ! 354. troy says: thanks bro.. it works 355. zulu says: thanks all internet legends.. got rid of annoying file with this fix. 356. teil says: Hey, THANKS a million! Got the file from an email attachment, which was indeed generated by Mac. Didn’t know what’s wrong with the file, didn’t know what to do with the file. Now, clean world returned after following your proc! Happy New Year! 357. Aleksandar says: This is great,BUT there is easyer way to do that,just download and install unlocer(to download it search on google unlocker,and download it from filehippo.com)after doing that,right click on the folder,or file that you want to delete,and click on “unlocker” after that select action “delete” click “ok” and BAM youre done 🙂 358. premo says: You have just freed me from a 3 year burden I thank you kind sage. Unlocker has done nothing for me. Nothing I tell you 359. mariachipr says: Thanks, you’re a genius. 360. martin says: thanks very much, you helped me on this issue, clear and concise, 10/10 have a good day 361. Midge says: I know this is an old subject, but I just ran into this problem today with a file that I had that ended in a “.” I tried everything I could think of to get rid of this stupid thing…but to no avail. I want to thank you, Duncan…your solution worked like a charm. I am so glad there are smart people like you out there to help “techno-challenged” people like me! 
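For readers like officejedi above who need to keep the files rather than delete them, the usual route in this thread is rename-first (with ren and the “\\?\” prefix), then move normally. A cross-platform sketch of planning safe replacement names, assuming the only problem is a trailing dot or space (helper names are my own):

```python
def safe_name(name: str) -> str:
    """Propose a Windows-friendly replacement by trimming the trailing
    dots/spaces that make a file unreachable through normal paths."""
    trimmed = name.rstrip(". ")
    return trimmed or "renamed_file"   # a name that is ALL dots/spaces needs a fallback

def rename_plan(names):
    """Map each problem name to a proposed replacement, dodging collisions."""
    taken = set(names)
    plan = {}
    for name in names:
        candidate = safe_name(name)
        if candidate == name:
            continue                   # name is already fine
        unique, n = candidate, 1
        while unique in taken:         # don't clobber an existing file
            unique = f"{candidate}({n})"
            n += 1
        taken.add(unique)
        plan[name] = unique
    return plan

print(rename_plan(["report.doc ", "report.doc", "notes."]))
# {'report.doc ': 'report.doc(1)', 'notes.': 'notes'}
```

On Windows the actual rename would then be done per Duncan's tip, e.g. `ren "\\?\C:\share\report.doc " "report.doc(1)"`, after which the file moves like any other.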
Thanks again! 362. oesoep tjhai says: Great! It worked . Thanks!! 363. David says: Great Solution – been bugging me and today stopped a data migration project in its tracks. Thanks, -unlocker is for a totally different set of circumstances Alex 364. Leddo says: Yeah same here, old prob, but still a very current solution. I came across a situation where I had spaces at the end of the folder name on a win 2008 server. I had to do rd \\?\”fullpath to directory” note the \\?\ was outside of the ” ” bit. Thanks again. 1. Duncan Smart says: Interesting! 365. me says: you are my hero of the day! 366. JR says: Thks for the UNLOCKER site…..works like a charm….months unable to get rid of file on my desktop…..it is now gone….this is a great place…..lots of great great info. once again thks. 367. David says: Yeah, it’s not working for me. I put the exact path into the command prompt and it keeps saying it cannot find the path specified. Hate my computer. I’m setting it on fire when I get a new one. 368. Shana says: it worked for me . . . i struggled for 2 damn years – thx 4 the heads up 369. Thank you for this. That file has haunted me for weeks. I’m afraid I can’t afford to buy you a beer but I just wanted to know how much you are appreciated. 370. Wendy says: You are an angel, I have been trying to remove some annoying folders for over 6 months now and thanx to you I now got it right. 371. Russell says: Thank you so much! This solution is fantastic for anyone using Dropbox across Apple OS and Windows. How has an explanation this clear not made it into my search queries for the past year? I don’t know what a beer costs in Abingdon, but I made an estimate. Cheers. 1. Duncan Smart says: Thanks Russell – much appreciated 🙂 372. Anex says: Scanned the Program Unlocker with multiple netbot/malware scans. Nothing found and worked like a charm, of course, not making the same mistake I uninstalled it after use. 
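Russell’s Dropbox scenario is a common way these files appear: names that are perfectly legal on macOS or Linux violate Windows naming rules. A small pre-flight check you could run before syncing, assuming the rules in Microsoft’s documented file-naming conventions (the function name is my own):

```python
import re

# Names that are legal on macOS/Linux but trip Windows up. The rule list
# below follows Microsoft's documented naming conventions (trailing dots
# and spaces, reserved device names, forbidden characters).
RESERVED = {"CON", "PRN", "AUX", "NUL",
            *(f"COM{i}" for i in range(1, 10)),
            *(f"LPT{i}" for i in range(1, 10))}
FORBIDDEN = re.compile(r'[<>:"/\\|?*]')

def windows_problems(name: str):
    """Return a list of reasons this file name will misbehave on Windows."""
    problems = []
    if name != name.rstrip(". "):
        problems.append("trailing dot or space")
    if name.split(".")[0].upper() in RESERVED:
        problems.append("reserved device name")
    if FORBIDDEN.search(name):
        problems.append("forbidden character")
    return problems

for n in ["notes.txt", "report.doc ", "nul.log", "a<b.txt"]:
    print(n, "->", windows_problems(n) or "ok")
```

Catching such names on the Mac/Linux side, before they cross over, avoids the whole “cannot read from the source file or disk” dance.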
Was trying to delete a remote manipulation of two ports from a trojan virus called ‘Owner.’ Couldn’t use command prompt to take care of it, no matter how long I tried, but it did provide more information about the command prompt which is always useful, thank you. 373. silvr says: [Torrentsworld.net] – Harry Potter And The Half Blood Prince DVDScr XviD-PUKKA.torrent above is the file that i am unable to delete from months…. its of 0 bytes….. the msg comes as can’t delete the file…can’t read from the source or disc… what should i do… plz help… 374. silvr says: plz help 375. hi all, i tried deleting a file named “CAMJIRMP.” i am not able to delete it since its showing “cannot delete file: cannot read file from disk” help me…. 376. Navin says: Thanks a lot!! Was breaking my head for almost a day… It worked exactly the way it said. Thanks Again!! 377. Thank you This has been troubling me for 6 months 378. to Arun.A: as our genius friend here (to whom we owe many thanks) described in the process, you need to CD (change directory) to reach the location, and check whether it is a FILE or a FOLDER. Trust me, it works 379. cmcculloh says: Amazing. Thank you so much for posting this! 380. Michael says: This worked great, and on XP sp3. I have been trying to figure out how to get rid of 8 Gig of recycle data that Norton left for unerase function, that it did not remove itself, files that were 5 years old. Dropped a donation in your beer fund, it was worth getting room back on my C drive. 1. Duncan Smart says: Much appreciated Michael, thanks! 381. Lenny Gray says: Another wrinkle which was necessary for me to get rid of mine: Mine was an evil directory, with contents, on XP, and didn’t respond to the method suggested. Eventually it occurred to me to isolate it in its own directory and use /s on that parent directory. I first looked at the contents with: dir /s “\\?\d:\o\parent” and then rmdir’ed it with: rmdir /s “\\?\d:\o\parent” 382.
matt says: thank you very much – I had file names ending with a space and this got rid of them. 383. I am terribly sorry to have to leave you feedback, instead of some bread, alas. This is an excellent, excellent, totally efficacious technique. I would strongly reemphasize the importance of the tip listed directly under the CL syntax demo. Here’s why. I’ve poured over kernel logs, I’ve typed, and retyped, the command argument until my face turned purple, and my head nearly exploded. All of this because I did not carefully enough read, the explicit, and clear instructions. I was removing an offending directory/folder using the rd/s command at first, with no result what-so-ever. In the end, and considering this was indeed a folder, rmdir worked for me on XP Pro SP3. For some reason yet unbeknownst to me, manually typing-in the correct file path produced zero results. I encountered the same error that I was trying to go-around. At this point, using the ‘tab’ key to auto-complete the file-path directly following the last directory before the actual name of the offending directory/folder, is critical. Again, for some reason, allowing the file-path to auto-complete solved the issue, and the offending directory/folder vanished into the abyss. This was in fact the result of a ham-fisted Mac-related FTP event, precisely as surmised. Which of course is preventable on the Mac-server-side, doi. So, this article is the perfect solution to a file/folder/directory that won’t delete due to the ‘can’t be found’ error. Thank you so much for this! Totally worked perfectly! 1. Duncan Smart says: Thanks for your pleasurably eloquent comment 🙂 384. tc says: I had a file named “gp.” on my desktop. Your trick seemed to work, but then my recycle bin showed it had something in it. I opened the bin and there was nothing showing. I tried empytying it and it asked me if I wanted to delete ‘Windows’, and when I said yes, it told me it was write protected and could not be deleted. 
Now I can’t get whatever it is out of the recycle bin because it’s invisible, and I can’t empty the recycle bin when I put other stuff in there. 385. Dan says: I’m still getting the Could Not Find error in the command prompt window. I have nine of these files ending in a “.” on my desktop. I’ve used the tab technique to locate them, but still get the error “Could Not Find file”. I also used the %USERPROFILE% trick, but still get the same error. I’m not sure what I’m doing wrong. I’ve tried this from the C: file, and from the C:\user\Desktop file location, but no luck. Anyway, I’ve got Windows XP. Any help would be appreciated. Thanks. 386. Dan says: OK, so I went back through and tried Keith’s short name trick from Sept. 20, 2008, and it worked beautifully. I suggest adding that to the original post at the top. I hadn’t seen it until I started going through all of the posts (which are quite a few at this point). I guess that’s a testament to how widespread the problem is, and how poor Microsoft’s support is. Oh well, no surprise there. Thanks again, Dan 387. RoCkAmI 88 says: @ Aleksandar Thanks Dude Easy & Efficient THANK YOU ALL OF YOU 🙂 388. Jimmy says: Thanks for the solution to this. A few folders have been bugging me for years, although I only just thought to google it. When I have a job I will contribute to your beer fund! 389. JG says: IT DID NOT WORK BY THE WAY CUTE! 390. JG says: FORGIVE ME, I THOUGHT IT DIDNT! 391. WODBON says: AWESOME. i HAVE xp sp3 and this worked del “\\?\C:\Documents and Settings\Administrator\Desktop\Garmin ” Make sure to copy and paste the file name. THX A MILLION 392. Jazz Trai Yin says: for my case..deleting the file and waiting for the error to come out “Cannot Delete : cannot read from the source file or disk” then right clicking the file and open with: notepad deletes it.. not sure if the file is there but i saved it on desktop and it doesnt seem to be here anymore.. 1. Jazz Trai Yin says: hold on DONT try this..
it might change your default files to open as notepad and you got a real problem at hand.. 393. Angel says: C:\Documents and Settings\user>del vaginaboob. Could Not Find C:\Documents and Settings\user\vaginaboob. 394. Angel says: OMG! IT WORKED! thank you sooooooo muchhhh<3 395. Rob Neal says: I downloaded a couple of zips that contained Mac alternative files. Now they will NOT move. Unlocker refuses to show up in the shell, and the DOS CMD prompts just either continue to say “file not found” or “Wrong syntax” However, I am using Windows XP64, so I wonder if that makes a difference. 396. Rob Neal says: Well that was quick! I never even thought to try this, but did it as a last ditch desperate attempt. I suspect some filenames have hidden characters, so from the command prompt, use DEL XYZ*.* where XYZ is the first few letters of the filename. Et Voila! It worked! This might also work with Duncan’s suggestion using the \\?\ prefix. 397. Steve says: Thank you so much! After downloading an order confirmation as an image, I found a bunch of additional files in My Pictures folder (Xp/sp3) with no extensions after the dot. Googled the “cannot delete..” error msg. and found your del cmd remedy “\\?\…” as the top Google link. It worked immediately! 398. Tim says: Thank you, your solution was really helpful and saved a lot of my nerves 399. Kostas says: This is without a doubt the most useful and simple explanation of a problem solution ever! love it, thanks a lot! 400. Reis~ says: Thank you so much for this info!!! WHEW!!! 401. nick says: This came in handy, on some files had an access denied on the file server. Launched a command prompt as the original user and was able to finish deleting them Thanks for the post! 402. joe says: The file I want to delete has a name that is too long so when I try to paste it into command my computer beeps a whole bunch and it wont all go in 1. Thank you Thank you!!!
says: I was having a similar problem of having to annoyingly type out a long file name and always getting it wrong or it not allowing me to paste in the command prompt. HOWEVER, one of the individuals here suggested pressing the “TAB” key which completes the rest of the file name for you once you’ve typed in the first few letters/characters so you don’t have to type the whole thing out!! Not only does this accurately type out your file name, but it’ll save you the headache of not having to paste! hope this helps! 1. Duncan Smart says: Yes, that tip’s in the article: “Tip: as you’re typing the file/directory name use the TAB key to auto-complete the name (press TAB repeatedly to cycle through possible names).” 403. Thank you very much! It worked!!! 404. Mattias says: Thanks a lot, this really helped me! 405. Peter says: Great! Worked like a charm! For those for whom it didn’t work – don’t forget the open and close quote! I wasn’t including those and was getting the error a lot of you describe! 406. Bala says: Worked like a charm, I will definitely sleep well tonight!! 407. alex says: I’ve tried the solution deleting files off a USB drive and have had no luck. Any one have a suggestion? Thanks, 1. Duncan Smart says: Backup the files you want to keep and then reformat the USB drive (right-click the drive in My Computer). 1. Avakai says: MY GOODNESS! THANK YOU THANK YOU THANK YOU THANK YOU DUNCAN THANK YOU A LOT!!! i’ve had this problem for days now… i backed up my files and reformatted the usb just like you said and it worked! for those who don’t know how to do it, go to: My computer >>> right click the USB drive >>> click ‘Format’ >>> choose the options you want (i suggest choosing the best ones.) >>>> Press ‘Start’ >>> “all data will be erased…”… press ‘ok’ (PS: REMEMBER TO BACK UP THE FILES YOU WANT) 408. SpyPower says: Oh thank you. I couldn’t think of another way of doing this. Damn NTFS format.
Also, if that’s the case, you can remove those files from FTP, sftp, etc etc, Or any network path. Either way, THANKS 409. Xlnt says: del /ah “\\?\D:\path to file” work like a charm for me. Thanks guys!!! 410. Thank you Thank you!!! says: THANK YOU SO MUCH!! AMAZING Advice!!! totally worked at got rid of my annoying long-extension file!!! 411. Sanjay says: Thnx a lot mate !! 412. hellYEAH says: thanks sooo much!!! Thanks for the tip my friend! 414. Atlast says: Sophos Anti-Rootkit did the trick! 415. Ike says: Thanks, it worked ok. User is happy. 416. Alex says: awesomeness. I had a little file whose name had [brackets] in it, and was very long… whenever there is a file I can’t delete, it brings back unhappy memories of viruses, mysteriously reappearing startup items, safe mode, complete windows reinstalls… AARrrrgh. I figured it was just the brackets or the length of the name causing the problem, but still, I wanted it gone! By the way— copying and pasting the exact file name is a good idea– at first the method above didn’t work for me, but copying the file name revealed one little space at the end of the file name– “filename.tmp ” <– one little space makes all the difference to these nitpicky machines. 417. Darren says: Wow, This was a very quick and easy fix, I couldn’t believe the file had actually gone. Many thanks. Darren 418. anthea says: i’ve tried the unlocker and even format my USB flashdrive and the command prompt but its not working!! pls help me 419. DonySmith says: thanks! this really helped… thank God this post is still here. 420. Lee says: Wow, love the Internet. This was my exact problem as well, though on a Windows 2003 server. Used your advice and problem solved. 🙂 421. Me says: I had no trouble using both commands in the example. I just used what the actual paths were on my pc after the C:\ 422. Zcippy says: Thank u so much!!!! You should get employed by microsoft! ;D 423. 
Ed says: I’ve been trying to delete a folder and kept getting the “Cannot read from source” message. I spent about three hours searching websites for an answer to this problem. Booted up in safe mode, tried dos commands (rd and redir) over and over, tried to move, rename, delete, etc, in Explore – all to no avail. I was going to format and reload the drive shortly and was really aggravated. Then I found your WONDERFUL website. All I can say is thank you very much!!! And I sent you \$25 via paypal – you earned – thanks again… 424. Charles says: THANK YOU! 425. george says: if once you manage to remove the initial files using the above sollution (ta for that btw) and you still have hidden temp files left that you can’t get rid of, renaming the directory from CMD will do the job. hope this helps 426. Ravish says: Thank you! 427. Bart says: Yihaa !! Wonderfull What a tip Thanx !! 428. Daniel says: Thank you! 429. Lawson says: thank you so much…this helped me alot 430. Hafizah says: Thank you! =) 431. Thanks a lot the file has been annoying me for a year. My first attempts failed(using XP). The Tap trick did not work with the . My file was on my second hard drive where my My Documents resides. In command prompt I first went to my second hard drive enter, and D:\> appears on the next line. I then hit tab untill the folder I wanted came up “My Documents”>, I then typed in the back slash “My Documents”\> and hit tab again untill the undeletable file came up “My Documents\Fw_Life After Road Runner.”> I then ran the curser back and typed in the del\\?\ like so del \\?\D:\”My Documents\Fw_Life After Road Runner.”> pushed enter and to my disbelief it worked. I have tried many times before typing in what eventually did the trick and got cannot find the file. The Tab was the key. Thanks again. 432. dante says: You’re brilliant. I couldn’t figure out how the he– to delete a zip folder that contained one buggy file. Damn, I sure miss the days of C:\ press any key to continue… 433. 
jason says: Thank you so much! it worked! 434. Worked like a charm on a folder created from a ZIP that came from a MAC. This had some weird trailing space character, not a period, but the \\?\ stuff worked great. Thanks! 435. Kandygirl says: OMG THANK YOU!! I had 2 empty folders stuck in my documents that were a result of failed encoding of torrent files. I thought I would have to look at those things from now on- how annoying! I used the rd /s “\\?\ tip from your post only mine was a space instead of . at the end. Worked PERFECT! Can’t THANK YOU enough! Thank God for computer geniuses like you!!!! 436. drew says: PERFECT!!! Works for me, thank you very much. 437. jordan says: Thanks for the tip!! 438. lime says: hi could you help me with this please. I tried what you said on the above instruction but this is what it said in the prompt command. http://i48.tinypic.com/4lg2g1.jpg It said that the system cannot find the file. 1. Duncan Smart says: Use the TAB key tip mentioned to auto-complete the file path. 1. lime says: done! XD I’ve missed some of the process. I didn’t know that I can drag the item to the cmd instead of manually putting the file’s address. Thank you very much! 439. I have something I want to delete, but it just won’t delete. 440. Andrew says: Yes! My desktop has 44 of these files on it and they have really been annoying me. What I have been doing is, every time I turn my computer on, I would grab them all and scoot them off of the desktop. But now I can begin the slow, arduous process of deleting them. Thanks! 441. Christian Vestergaard says: Thank you very much for this help. I had to try it several times before I got the ‘\\?\’ in the right place but when I got it right it worked well. Thank you. 442. Jos says: Thanks, A wonderful, simple solution. In a few seconds I wiped a lot of annoying ‘non existing’ files shown for years in my documents folder. How one person can be smarter than all the Microsoft helpdesks all over the world….. 443.
Hi all of you, I get this error, if someone can help me please: Cannot read from the source or disk. When I put my CD into the CD-ROM I get this error, and I want to copy the file to my hard disk but it cannot be copied. Help me please guys ..email kandasoo@hotmail.com 444. Youssouf Salah says: Thank you very much man you have helped me so much with the annoying files I had and Thanks again for the last evil trick 445. Kenneth says: thanks for the help! 🙂 446. Atropos says: I had to use the /a technique after the final quotation mark. What a relief! 447. Dave T The Don says: Hi Tried what you suggested, but couldn’t dig down to the d drive in cmd, it keeps on returning to c:\ when I cd d:\. But I got to thinking, if it was a mac file left over why not delete it from the mac!!! Hey presto, the mac share found the file and I deleted it using the mac. 448. Thomas says: I’d like to suggest one other solution (maybe it’s mentioned somewhere in the 528 previous comments). Download and install the process explorer, http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx Select “System” and look for a file handle for the directory or file you’re trying to delete. You’ll see what I mean. Right click on it, and select “close handle”. The dir/file should be removed automatically through this. If you don’t find the dir or file in “System”, try finding it under “explorer.exe” instead. Best of luck! /t 449. Chulwoo Lee says: Thank you so much!!!!!! i finally deleted my annoying file for god’s sake!! thank you again and hope u have a nice day!! 450. SGerber says: I had a directory and subdirectory with a trailing space and nothing worked for me! And I found a solution: sdelete. Typed at a command prompt, sdelete removed all the subfolders with a trailing space. I’m on Windows 2008 64-bit. YES it works! 451.
steven farmer says: This didn’t help I hoped it would but I still can’t copy the file from my portable hdd to my friends laptop I’ve tried renaming and changing all the properties please can you help 1. Duncan Smart says: Look at using the rename command (look for the tips above on using the TAB key to get the filename right): e.g. rename “\\?\C:\foo\bar\myfile.ext” mynewfile.ext 452. Perng says: How should i change the command so i can copy the file from my external hardisk to my laptop? it says “Cannot copy file : cant read from source file or disk. thanks so much for ur time 🙂 1. Duncan Smart says: Try using the ren command first: e.g. ren “\\?\C:\blah\de\blah.ext” newname.ext Remember to use the TAB key trick mentioned to auto-complete the path as you’re typing. 453. Kirtan Doshi says: Thanks a lot mate! that was an invaluable lesson. 🙂 Ur da Man! 454. Aaron Reciproco says: thanks for your command line, dude! I managed to delete one annoying file on my desktop. You’re the MAN! 455. Taieb says: THANKS a lot Man, really you’ve helped me so much, you are the man! 456. LMB says: Hi! I would just like to say thanks. I followed these steps and successfully deleted a mp4 file that has been stuck in my desktop for quite some time now. THANKS A LOT!!! 457. Rex says: Add another to the list… tried MoveOnBoot, booting into Safe Mode and setting permissions for the whole folder to delete one file. Thanks 🙂 458. Katie says: Win!! Thank you so much. I finally got rid of the file that was preventing me from fixing a drive. Cheers!! 459. Jesus… I can’t thank you enough for delivering the solution. I’ve never seen this before… it happened following a program I installed yesterday: as I directed it to use a data folder named aaa it created another folder named aaa(dot) Rename, forget it It kept coming back following a refresh Anyway, this solution worked and UNlike some other M.F. toolheads out there… this solution did NOT delete my entire drive. Many thanks.
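The wildcard approach Rob Neal describes further up the thread (comment 396) — deleting by the first few letters plus `*` — works because the pattern matches whatever invisible trailing characters follow the visible prefix, so you never have to type the untypable part. A quick illustration using Python’s fnmatch as a stand-in for cmd.exe’s `*` wildcard (the semantics of `*` are close enough for this purpose):

```python
import fnmatch

# One clean name plus two "untypable" variants: a trailing space and a
# name containing a non-breaking space (U+00A0) and a trailing dot.
names = ["Garmin", "Garmin ", "Garmin\u00a0notes."]

def matches(pattern, candidates):
    """Return the candidate names the wildcard pattern would catch."""
    return [n for n in candidates if fnmatch.fnmatch(n, pattern)]

print(matches("Garmin*", names))   # catches all three, hidden junk included
print(matches("Garmin", names))    # catches only the exact, clean name
```

This is also why copying and pasting the name into the prompt (or TAB completion) so often reveals the problem: the pasted text carries the invisible character along, while a hand-typed name does not.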
Many blessings on Stuart for providing this assistance. Peace Telemental 460. jon says: thanks very much! i’ve finally gotten rid of those annoying files i couldn’t delete!!! 461. Hemant says: Thanks! 462. Sakthi says: Thank you very much. i have finally resolved. great stuff. 463. Catt says: Thank you! Geez that was annoying. Now it wouldn’t let me delete the file per-say, however the folder deletion worked just as well. thanks so much for the help! 464. SKumar says: Thanks, that works! 🙂 465. Wow… thanks so much brotha, my problem solved… many thanks 466. sujay says: thank u .. the command helped me to remove the file from E: drive which was earlier giving a error msg when try to delete.. thank u again 467. JB says: Genious. worked like a charm. THANK YOU! 468. David says: I have tried everything reading down through this thread but unless I missed it I have not found what to do when the computer days can not find “del”. I have copy pasted the entire string to put at the dos prompt and then pasted in the path to the folder that I can not delete, but all I get is “can not find “del'” Thank you! 469. Ivan says: hi i need help this file is a virus and i cant remove it. cant delete it and kav cant delete it too … i get this error Cannot read from the source file or disk what should i do ? C:\WINDOWS\SYSTEM32\DRIVERS\xilymjgq.sys 470. babi says: thankx duncun it works well worked ur this solution @vassos: One thing that may help is to use the TAB key to auto-complete the file path for you. Type the following (note the \\?\ at the start): …then press TAB until your filename appears, when it does, press ENTER. 471. Bvvood says: Thanks for this! I’ve created a file in remote server through and ftp program with a dot at the end of filename. Once I’ve created it, it suddenly cannot be deleted. Thanks to your tips, I’m able to remove this file. Damn windows bug! 472. Chris says: This helped me, thank you! 473. Michael Trahan says: Thanks a Billion! 
I am very experienced, but self-taught; and I was stuck with a “file”/non-file thing on my XP x64 Desktop for a few months. Finally I thought to search the error message (duh!, but guess I was lazy at first and my desktop was messier then anyway). Well, I am posting this for thanks and two other reasons – for all reading with this kind of issue: 1 – For me the file ended with a space; and not a period; and thought I would spread the word this still worked! 2 – Also, I read above some had problems with copy and paste. I too had to re-learn how to do this, but you should be able to do it simply with the windows command prompt (well, at least mine, see version above). I use the portable one from portableapps.com, and also discovered it to be tricky. It seemed to keep “forgetting” or something, what I had copied into the clipboard (which was the entire command written in a text editor to include everything (of course)). I had to carefully left click the very top left of the prompt window to bring up the context menu which included Edit->Paste. Maybe mine was acting up for some other reason, but I think this may help others; keep trying. So happy that “file” is gone! 474. Pasquale says: I successfully solved the problem “Cannot delete file:Cannot read from the source file or disk”referred to a file on desktop, adopting the following steps: – go to prompt command – write DEL “\\?\C:\Documents and Settings\Administrator\ name_of_file” – clic two or three times Good luck to everybody. Pasquale, Firenze, Italy. 29 Luglio 2010 475. Pasquale says: Errata corrige: bye P. 476. Rupert says: The \\?\ trick worked for me. I also had a file with a “dot” extension that no other method would remove. (Short names disabled.) 477. Sudhakar says: Thanks a lot it worked for me 🙂 🙂 478. Stewart says: Great stuff thanks! Worked as mentioned above for files created by a MAC with a space at the end. 479. Abigail says: Great minds at work! Thank you so much. 480. THANK YOU! 
That annoying directory is now GONE GONE GONE!!! THANKS!!! 481. USB says: For all you with a USB, if you have a file on it that you can’t get rid of, just format it… 482. Vicky says: Ok, I have a similar problem. It’s the same “cannot read from source file or disk” file type that ends with a “.”. However, it is sitting in the temporary path of my CD writer…I guess it is owned by the “D” drive. It is blocking my ability to write to the CD, and I can not delete, move, or do anything to it. I can not access the file in any manner….but I am not proficient in DOS. So, how can I get to this file to delete it? Thanks, 1. Duncan Smart says: What version of Windows are you using? What do you use to write your CDs? 1. Vicky says: I am on windows XP on Dell hardware with an internal CD writer. I am not doing anything special writing to CDs, just burning files to a CD. I normally archive photos, word docs, etc on CDs and was trying to send a word doc file to the CD when this file got stuck or created. The file name is a two word name, followed by a period. 2. Duncan Smart says: The thing to do is try and find where the temporary files are kept. Have a look in the user temp folder (Start > Run > “%TEMP%” > OK) and try and clear everything out of there – hopefully you will then find the offending file and you can use the Command Prompt to delete it using the tips above. THANK YOU!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 484. Vincent says: This is so useful and easy thanks…. 485. Distortion says: Doesnt work for me…. ” Could Not Find Annoying File ” 486. soli says: Hey buddy that was great, thanks a ton! 487. Vicky says: Sorry to be late responding, but i was out of town. OK, I am over my head here. There are a lot of tmp files and folders there in the temp directory; nothing with the name of the folder I am trying to delete nor any with the same creation date. Am I supposed to delete them all, or am I looking for my bad file?
If I am looking for something that has the properties of the bad file, it is not there. (Also, I previously tried to delete it from the Command prompt, thinking it was in the “D” directory, and the error message was “file not ready.”) Thanks, 1. Duncan Smart says: Actually I just did a Google for the temporary CD burning folder in Windows XP and it appears to be here: C:\Documents and Settings\User Name\Local Settings\Application Data\Microsoft\CD Burning Maybe the file in question is in there? 488. Vicky says: Found it!! HOORAY! Now to delete it; My CMD default setting is Documents and Settings\Vicky> My bad file name is BlockNewman. So, after the CMD prompt, I tried: DEL /Local Settings\Application Data\Microsoft \CD Burning\BlockNewman.\ The message I received back was: Invalid switch – “Local” I have tried various other strings, including using “\” at the beginning of the string, putting the entire string in quotes, etc., and the messages are: The system can not find the file specified So, I obviously don’t know how to delete from the CMD prompt! Suggestions? Thanks, 1. Duncan Smart says: Should be something like this (all one line, and the quotes are important): del "\\?\C:\Documents and Settings\Vicky\Local Settings\Application Data\Microsoft\CD Burning\BlockNewman." As you are typing you should be able to use the TAB key to auto-complete stuff as you go. If it’s a folder then use the rd command, e.g. rd /s "\\?\C:\Documents and Settings\Vicky\Local Settings\Application Data\Microsoft\CD Burning\BlockNewman." 489. Vicky says: Thanks for the suggestion. I tried the command for the file delete as you outlined and the message is “network path not found”. When I try the folder command as you suggested, I first get the “are you sure” message, and when I respond Y, the message is: “The system can not find the path specified.” When I look at the file, it looks like a file, not a folder…and I double and triple checked the file type, so I know we have that correct. 1.
Duncan Smart says: Just make sure you type the del command correctly, computers are very unforgiving: * Important bit to get right is \\?\C:\ at the start * remember the quotes * make sure you’re using backslashes, not forward slashes, * use the TAB key to auto-complete. 490. Jeff says: How do you use this syntax to copy one directory (with multiple internal directories) from one drive to another drive? I have tried every variation of \\?\ and cannot get anything going. 1. Duncan Smart says: Something like: xcopy “\\?\C:\my source directory” “\\?\C:\my destination directory” /s 491. bts says: Brilliant tip……………………… thanks 492. Dave says: Thank you so much this allowed me to clean up a hard drive I was having trouble with! 493. 421 says: thank you, worked perfectly. 494. thank you very much for a very simple yet effective technique!!!! 495. Aries Mars says: S 496. Aries Mars says: Dear Sirs; Firstly, Please accept my apologies for not understanding what The HTML tags and attributes below actually do? I an certain that ithey are for making a post more convenient? Please excuse me if my lack of knowledge regaerding them, creates an inconvenience in your Fine Posting area? Well,..now,… as to why I’ m actually here, & Posting!! Your method in putting the del \\?\ in front of the Prompt worked! YEEEE – HAWW!! :o) FRIGGIN ! Un – Believeable!!…I had to try it a few times ..but only because I’m a Dummy!… and have never used this Prompt Program before! I now realize that I had to use a Capital C:\ immediately after the del \\?\ and ,……I finally figured out !!….as well that that I needed to place a period and the ” at the end of the fIle name ,…even though the File that was permanently stuck to my desktop, did not habve the period. or the ” !! I looked at the prompt, thinking that it would once again state something to the effect that it couldn’t find the path~! But to my Delight!!! Mister Duncan Smart’s Advice was worthy of my complete attention!! So I Thank you Sir! 
For a Very Much Appreciated Method ( that Indeed Works! ) That Cursed File …Had sat immovable…on my desktop for the last two years or so!! When I Glanced down at my Desktop,…immediately after I had succeeded in Moving it,….I Did a Little Dance! And then I noticed that my Desktop Looked Strangely Organised again!! Yes! that Little Unwelcome File, had stayed it’s last day on MY Property!! HA-HA! :o) Over the Last Two years…. I had tried just about EVERYTHING!…BUT TO NO AVAIL! …..UNTIL NOW!!! I tried changing the File’s extension’s …tried renaming the File…..changed it’s “opens with”…numerous times! I have used Different …….”File Shredders”……….and I have used a couple different File “Unlocker” Programs…and used my Window Washer program with specifically – directed washes geared towards bleaching it out!!…I have even used 3 different registry editors thinking that they would delete it after they discovered that it no longer had a specific “target”…….I have also made a shortcut to it thinking that I could then just delete it’s “target”…..but all to no avail!!! ……Until Tonight!!! yeeee!!!—HAAAAAAA!!! :o) I have even changed the Files extension to one that only one specific Media Player could Play!…and then deleting all those types of files…and then as well , the corresponding media Player itself!!!! Yes! Just about every trick in the Book! …..That I knew of or heard of, anyways!! So,…I just wanted to Personally Thank you!!! Mister Duncan Smart!!… 497. Aries Mars says: OOOpps ! I got cut off before I finished typing!!..I juist wanted to add …that anyone would be foolish to not try your Method! Mr Smart!….As you Truly Must be A Smart Mister! Thank you very Much for enabling me to be free of that damned File! ……….Finally! :o) Sincerely, Aries Mars 1. Duncan Smart says: You’re welcome 🙂 498. Bev Timerding says: Amazing! Have tred all sorts of File Assassin variations to no avail and voila – your solution worked beautifully. 
You are still getting thank you’s over 2 years later; pretty impressive. 499. Erwin says: I tried this but must admit to getting confused as to whether to use the quotation marks or not; and I couldn’t paste into the command prompt [DOS and I seem to have a mutual hatred for each other]. Having had Unlocker on my system for over a year, something Alec said reminded me that I could use that by right clicking on the obnoxious file [I’m used to it popping up to lend a hand and it never volunteered its services with this file]. So I did exactly what Alec did and it died [a hopefully painful death]!!! Thanks for the help I don’t think it matters how it worked, so long as it does. Other people’s comments are often just as useful as the original post ……. Thanks for starting this post, Duncan. Did you ever think you’d still be helping people with the same issue 2 years later?….. Amazing that you haven’t just dropped it to go on without you. Cheers to all. 500. richman says: THANKYOU THANKYOU A PROBLEM FOR THE LAST 2 YEARS HAS BEEN SOLVED BY you ARE A LEGEND THANKS 501. thx for the original thread. this helped steer me the right way. BTW, if the “del \\?\” trick doesn’t remove the file since it cannot find it, try the “rmdir /s \\?\” or “rd /s \\?\” trick, which removes the entire subdirectory and offending files. this worked for me. another common problem with these rogue files is you don’t have permission to access them. in this case, you need to assign permissions with a stronger utility called “subinacl”, which can be downloaded from the Microsoft website. 502. oggmeista says: dont know if this will help but I tried my own way of deleting these files and it worked!!
What i did was create a folder, moved all the other files in the problematic folder into this other folder, then moved it to another directory on my hard disk. Then i went to the problem folder, copied the address bar for the folder, opened a command prompt and typed cd (right click, paste, then Enter), then i simply typed del *.* (then Enter) and the files were gone!! 🙂 hope maybe this might help someone else 1. Kevin says: Yeah, this worked for me when I could not get the other to. Thanks a million. 503. marvin says: THANKS ALOT. 504. Ty says: I am having the same issue with a file on my desktop, however this method does not work for me! I tried typing the file name and path directly and it says it does not exist, even when I use TAB to cycle to the correct file it says cannot find file when I try to delete it. Looks like I’m stuck with these 5 files on my desktop. 505. gocoogs says: Great help, thank you! For those of you that are still having problems, an alternative solution is to install cygwin (http://www.cygwin.com/), change to the directory containing the problematic file (or the parent of the problem directory), then issue the command: rm “PROBLEMATIC_FILE/DIRECTORY_NAME” -rf use the problematic file or directory name without quotes. 1. Fidskwizard says: Excellent. Linux commands for the win! I had tried a bunch of things, and nothing worked. I was doing it remotely like this, on a folder…. RMDIR ‘\\Server\share\folder\BADfolder1 ‘ Note the space at the end… that seemed to be part of the issue. 506. Rorz says: hello sir on my desktop there’s a file which is somewhat a virus or whatever else you call this which i cannot delete…. same as yours it prompts “cannot delete file:cannot read from the source file or disk.” I tried using the command which you have posted but it doesn’t work…. thanks. 507.
Jayesh says: Hi Two annoying files in C Drive – “½îG⌐╧ÄΣ.” (Size : 133728K; 16-Sep-10), “g” (Size 200k, no datestamp) Since I have moved these files to a new folder under root, I don’t remember the original folder. Tried every solution on this page. But to no avail. Jayesh 508. Scott_ says: Thanks for the information above. It works! 509. bam says: At first this didn’t work because the folder I was trying to delete was one I created in the Documents and Settings folder, but it wasn’t a user folder (long story). So when I typed the rd /s “\\?\C:\Documents and Settings\Annoying Folder”-style command it said it couldn’t find what I wrote. Whereas I couldn’t move the annoying folder before, I discovered that I could move the containing folder up one level, and since none of it was inside the Documents and Settings folder and expected to be treated like a user folder anymore, I could just type rd /s “\\?\C:\Annoying Folder” and it was gone! I tried researching this on other sites, but their solutions seemed too complicated (since I’m not too techy). But this was pretty easy. THANK YOU, THANK YOU, THANK YOU! 510. aethan says: It worked for me! Thanks a lot 511. Anon says: It Worked! I had corrupt files that had been sitting there for about a year. I tried a few programs that didn’t work but a simple command prompt did. Thank you so much! Had a file ending in a dot that could not get removed. Using \\? did not work for me. Unlocker was able to remove it however. 513. Excellent article! I’m glad it didn’t take me years to figure this one out thanks to the info here. I was having trouble transferring files across the network from a Mac and it happened to be the ‘.’ at the end. I simply renamed the file and bam! I think the dot was added when I extracted a rar set. 514. mike says: worked for me THANX!!! 515. DB says: Thank you so much for the fix to this annoying problem.
I was just clearing my Favorites before passing on an old netbook in the family and came to one with this issue. It looks like I’d saved the favorite with a name ending in “…” to be cute. Why would IE let me do that if it’s an issue? lol Anyway, your “rd” command option did the trick. Thanks heaps! 516. EMM says: Thank you!!! That file has been screwing up my backup protocol for ages. And it was exactly as you said, an old Mac file with a period at the end. Hooray! 517. Eriyu Snow says: Thank you so much! Not only did this work, but it’s both informative and concise. ^_^ 518. Krishna says: Thanks It Worked 519. Amit Gill says: HI Duncan, I’m facing the same problem on my WD external drive. I have a backup folder which is showing the files with weird names, ex: ∙╚Üæ⌐ █ . σÿ . I have moved them to a folder called JUNK but I cannot get rid of these files as the error : “Cannot Delete file : Cannot read from the source file or disk” comes up while deleting these files or even the folder called Junk. I have tried all the above-listed commands but no results, it keeps throwing the error : ” ∙F:\JUNK4-13-10\Delete\Junk\Junk-1\#<εδƒZ§Ö.R^Φ – The filename, directory name, or volume label syntax is incorrect. " where F: is my external drive. P.S: My e-mail is andy81p@hotmail.com 1. Duncan Smart says: Have you tried: RD /s “\\?\F:\JUNK4-13-10\Delete” Alternatively, back up your external drive and reformat it. 1. Amit Gill says: That seems to be a final option. Didn’t want to opt for that cauz of the data and space issues. F:\>RD /s “\\?\F:\JUNK4-13-10\Delete” “\\?\F:\JUNK4-13-10\Delete”, Are you sure (Y/N)? y The system cannot find the path specified. F:\>RD /s “\\?\F:\JUNK\4-13-10\Delete” “\\?\F:\JUNK\4-13-10\Delete”, Are you sure (Y/N)? y The system cannot find the path specified. F:\>RD /s “\\?\F:\JUNK4-13-10\Delete\Junk” “\\?\F:\JUNK4-13-10\Delete\Junk”, Are you sure (Y/N)? y The system cannot find the path specified. Thanks! 2.
Duncan Smart says: Seems as though you might be getting the path slightly wrong. Use the TAB key completion tip mentioned above. 520. Amit Gill says: No luck! 😦 I tried them all, simply using copy-paste. I have it now as : “F:\JUNK” and inside the JUNK folder I have 2 other folders named Junk-1 and Junk-2 (all the weird files are sitting in these 2 folders). I have used the commands as : RD/s “\\?\F:JUNK\Junk-1” (with and without spaces) No Luck RMDIR /s “\\?\F:JUNK\Junk-1” I tried both commands with and without quotes too, but no luck! Any help is much appreciated. I am really frustrated with this space being blocked! 521. jim says: OMG thanks, i just got rid of a dir like that. It seems you need to use the full path also for this to work. ^ Amit you are missing the “\” right after the drive letter. try “\\?\F:\JUNK\Junk-1″ 1. Amit Gill says: Thanks Jim! but the problem was solved after I was able to run CHKDSK on my disk; once the PC came up the folder was not there. don’t ask me how. but i appreciate all the support! Best Regards Amit 522. Tony says: I don’t understand any of this. Please help me. I am having a similar problem to what Amit described, but this all looks like gibberish to my dyslexic eyeballs. 1. Amit Gill says: Tony, Did you try running CHKDSK without /F from the command prompt? If not, try doing that first and hit yes to whatever the command asks. Once you are done and see the disk result, that should fix the corrupted folders. Once you are done with that, try doing the same thing again but this time with the /F option. To do so you can follow the steps below: Step 1 : 1. Go to Start 2. Run 3. type : CMD in the command prompt window type : CHKDSK and hit yes to everything Step 2 : 1. Go to Start 2. Run 3. type : CMD in the command prompt window type : CHKDSK /F and hit yes to everything. This will take several minutes (depending on the PC). Once you are back, see if the folders still exist or not.
If yes then you can follow the command : RD /s “\\?\Drive Letter:\Folder Name” or RMDIR /s “\\?\Drive Letter:\Folder Name” I hope this helps. Good Luck! 1. Tony says: I get the message “Unrecoverable error in folder … would you like to convert folder to file?” I am not sure what this means. 2. Amit Gill says: If it is about the corrupted folder then YES go ahead and hit yes.. anyways what CHKDSK will do is check the disk for errors and windows will try to fix those errors 3. Vassilis says: Amit Gill Thank you soooooooooooooo much !!!!!!!!!!!!!!! Your suggestion worked !!!!!!!!!!!!!!!!!!!!!!!!! 523. Jaemz says: Google a program called Delete FXP Files. This program can delete the file 524. Thank you, thank you, thank you! In backing up files manually, a folder was created in a large nest of folders; I could get rid of most of them except one way down in the directory. Nothing removed it, except your solution. Yea!!!!! 525. David BYrnes says: YOU ARE A GOD!!!!!!!!!!!!!!!!! Thank you. This just happened, and was driving me crazy! 526. William M. Reed says: This thread helped me figure out the problem. The solution CAN be easier, if you have a Mac still on the network to which you have access. Go to the Mac, and try deleting the file from there, as you would any other file. Being a lowly Mac user myself, all that CMD stuff confuses me. Fortunately, I rarely need it…unless I’m using a PC. 😉 WMReed 527. Marius Vasilescu says: It worked for me too. Thank you!
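For readers skimming this thread: the core trick Duncan keeps repeating is the `\\?\` extended-length path prefix, which tells the Win32 API to skip the normal path parsing that silently strips trailing dots and spaces from names. Below is a small Python sketch of that prefix logic (the helper name `extended_path` is my own; the commented-out deletion call only has any effect on Windows, where it mirrors the `del "\\?\…"` command-line trick):

```python
import os

def extended_path(path):
    r"""Prefix an absolute Windows path with \\?\ so the Win32 API skips
    path normalization (which strips trailing dots/spaces from names)."""
    path = os.fspath(path)
    if path.startswith('\\\\?\\'):
        return path                      # already an extended-length path
    if path.startswith('\\\\'):
        return '\\\\?\\UNC' + path[1:]   # UNC share: \\server\... -> \\?\UNC\server\...
    return '\\\\?\\' + path

# Hypothetical usage on Windows, equivalent to: del "\\?\C:\...\badfile."
# os.remove(extended_path(r'C:\Documents and Settings\Vicky\badfile.'))
```

The string manipulation runs anywhere; the actual `os.remove` call would need a Windows machine and a real stuck file.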
https://couryes.com/%E6%95%B0%E5%AD%A6%E4%BB%A3%E5%86%99%E6%95%B0%E8%AE%BA%E4%BD%9C%E4%B8%9A%E4%BB%A3%E5%86%99number-theory%E4%BB%A3%E8%80%83math-1001/
## Number Theory Assignment Help | MATH 1001

26 July 2022

## Wilson's Theorem

John Wilson (1741-1793) was an outstanding British mathematician at the University of Cambridge. However, the following theorem that bears his name was not discovered by him.

Theorem 3.6 (Wilson) A number $p>1$ is prime if and only if
$$(p-1)! \equiv -1 \quad (\bmod\ p). \tag{3.5}$$

In 1770 Edward Waring first published the implication $\Rightarrow$ in Meditationes algebraicae on p. 288, without any proof, and attributed it to John Wilson. He literally wrote that if $p$ is a prime number, then the sum $(p-1)!+1$ is divisible by $p$. At that time, the concept of congruences had not been introduced yet. According to Hardy and Wright [138, p. 81] the implication $\Rightarrow$ was already known to Gottfried Wilhelm Leibniz (1646-1716) in somewhat modified form. The converse implication $\Leftarrow$ was later proved by Joseph-Louis Lagrange in 1773. Therefore, Theorem 3.6 is sometimes called the Wilson-Lagrange Theorem.

Let us further note that the assumption $p>1$ is omitted in many textbooks. For $p=1$ the congruence (3.5) is satisfied, but 1 is by definition not a prime number.

Proof of Theorem 3.6. $\Rightarrow$: If $p=2$, then congruence (3.5) obviously holds. So let $p>2$ be a prime and let $a$ be an arbitrary positive integer less than $p$. By Theorem 3.4 there exists exactly one positive integer $b<p$ such that $ab \equiv 1 \ (\bmod\ p)$. From Theorem 3.5 we get that if $ab \equiv 1 \ (\bmod\ p)$ and $b \equiv a \ (\bmod\ p)$, then either $a \equiv 1 \ (\bmod\ p)$ or $a \equiv p-1 \ (\bmod\ p)$.
From this it follows that the integers $2, 3, \ldots, p-2$ can be reordered as the progression $a_{2}, a_{3}, \ldots, a_{p-2}$ so that in pairs we have
$$a_{i} a_{i+1} \equiv 1 \quad (\bmod\ p)$$
for $i=2, 4, 6, \ldots, p-3$. Between 2 and $p-2$ there are exactly $p-3$ numbers, which is an even number. Therefore,
$$(p-1)! \equiv 1 \cdot (p-1)\, a_{2} \cdots a_{p-2} \equiv (p-1)\, 1^{(p-3)/2} \equiv -1 \quad (\bmod\ p).$$

## Dirichlet's Theorem

In 1837 Peter Gustav Lejeune Dirichlet (1805-1859) published an interesting theorem that uses very sophisticated analytical methods in number theory.

Theorem 3.10 (Dirichlet) Let $a, d \in \mathbb{N}$ be coprime integers. Then there exist infinitely many primes in the arithmetic progression
$$a,\ a+d,\ a+2d,\ a+3d,\ \ldots$$

A proof of this statement is in the seminal paper by Peter Gustav Lejeune Dirichlet [94]. Theorem 3.10 can be equivalently formulated by saying that the set
$$S = \{\, p \in \mathbb{P} : p \equiv a \ (\bmod\ d) \,\}$$
has infinitely many elements. Moreover, the density of $S$ in the set of primes $\mathbb{P}$ is equal to $1/\phi(d)$, where $\phi$ is the Euler totient function, i.e.,
$$\lim_{x \rightarrow \infty} \frac{|\{\, p \in \mathbb{P} : p \equiv a \ (\bmod\ d) \text{ and } p \leq x \,\}|}{|\{\, p \in \mathbb{P} : p \leq x \,\}|} = \frac{1}{\phi(d)}.$$

A proof of this statement can be found e.g. in Ireland and Rosen [151, pp. 251-261]. However, one of the most beautiful and at the same time most surprising mathematical results from the beginning of the 21st century is the Green-Tao theorem published in the Annals of Mathematics; see Ben Green and Terence Tao [126].

Theorem 3.11 (Green-Tao) For any positive integer $k$ there exists an arithmetic progression of length $k$ consisting solely of primes.
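Both theorems are easy to check numerically for small cases. Here is a minimal Python sanity check (trial division only, which is fine at this scale; this is an illustration, not part of the proofs above):

```python
from math import factorial

def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Wilson's theorem: n > 1 is prime  <=>  (n-1)! ≡ -1 ≡ n-1 (mod n)
for n in range(2, 60):
    assert (factorial(n - 1) % n == n - 1) == is_prime(n)

# Dirichlet's theorem (illustration only): primes keep appearing in a
# coprime arithmetic progression, here a = 1, d = 4:
print([p for p in range(2, 60) if is_prime(p) and p % 4 == 1])
# -> [5, 13, 17, 29, 37, 41, 53]
```

Of course, no finite check proves Dirichlet's infinitude claim; the loop merely shows the progression is not starved of primes early on.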
http://math.stackexchange.com/questions/74860/simple-characterization-for-the-complement-of-a-kernel-of-a-linear-transformat
# Simple characterization for the “complement” of a kernel of a linear transformation?

Let $V$ be a finite-dimensional vector space and $T:V\to W$ a linear transformation. Take $\{v_1,\dots,v_r\}$ to be a basis for $\ker T$, and complete this set into a basis $\{v_1,\dots,v_r,u_1,\dots,u_m\}$ of $V$. Taking the spans of $v_1,\dots,v_r$ and $u_1,\dots,u_m$ separately, we obtain $V=U_1\oplus U_2$, where $U_1$ is the kernel of $T$ and $U_2$ is the span of $u_1,\dots,u_m$. It's easy to see that the image of $T$ is isomorphic to $U_2$. The question: is there a "nice" characterization of $U_2$ which does not involve completing the basis of the kernel? The kernel itself is defined in a very simple way: $\ker T=\{v\in V:T(v)=0\}$. - Your $U_2$ is of course heavily dependent on your choice of basis. You could say $U_2=U_1^\perp$ with respect to the unique inner product for which your $v$'s and $u$'s are orthonormal, but that's just rephrasing exactly the same thing. We do have a canonical isomorphism $U_2\cong V/\ker T$ characterizing $U_2$ up to isomorphism, which is pretty nice. - There is no such characterization, and a reason is that $U_2$ is not unique except in the degenerate cases when $U_1$ is $\{0\}$ or $V$. For example, if $V=\mathbb R^2$ and $U_1=\mathbb R\times\{0\}$, $U_2$ may be $(t,1)\cdot\mathbb R$ for any $t$ in $\mathbb R$.
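The non-uniqueness in the last answer is easy to make concrete. Below is a small Python sketch (my own illustration, not from the answers): the map $T(x,y)=(0,y)$ has kernel $\mathbb R\times\{0\}$, and every line $\operatorname{span}\{(t,1)\}$ is a valid complement, since each vector decomposes uniquely against it:

```python
def T(v):
    """Linear map R^2 -> R^2 with kernel R x {0}: T(x, y) = (0, y)."""
    x, y = v
    return (0.0, y)

def decompose(v, t):
    """Write v = k + c with k in ker T and c in span{(t, 1)}."""
    x, y = v
    c = (y * t, y)          # the unique multiple of (t, 1) whose second coordinate is y
    k = (x - c[0], 0.0)     # the remainder lies in ker T = R x {0}
    return k, c

# The same vector decomposes against *different* complements span{(t, 1)}:
for t in (-2.0, 0.0, 0.5, 3.0):
    k, c = decompose((7.0, 3.0), t)
    assert k[1] == 0.0 and T(k) == (0.0, 0.0)        # k is in the kernel
    assert (k[0] + c[0], k[1] + c[1]) == (7.0, 3.0)  # v = k + c
    assert T(c) == T((7.0, 3.0))                     # c carries all of T(v)
```

Since every choice of `t` passes, no formula depending only on $T$ can single out one of these complements.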
https://www.shaalaa.com/question-bank-solutions/area-triangle-triangle-whose-vertices_5714
# Solution - Area of a Triangle

Concept: Area of a Triangle

#### Question

Find the area of the triangle formed by joining the mid-points of the sides of the triangle whose vertices are (0, –1), (2, 1) and (0, 3). Find the ratio of the area of the triangle formed to the area of the given triangle.

#### Similar questions

The coordinates of A, B, C are (6, 3), (–3, 5) and (4, –2) respectively and P is any point (x, y). Show that the ratio of the areas of triangles PBC and ABC is

Find the area of the triangle PQR with Q(3, 2) and the mid-points of the sides through Q being (2, −1) and (1, 2).

If the points A(x, 2), B(−3, −4) and C(7, −5) are collinear, then the value of x is: (A) −63 (B) 63 (C) 60 (D) −60

If D, E and F are the mid-points of sides BC, CA and AB respectively of a ∆ABC, then using coordinate geometry prove that Area of ∆DEF = $\frac{1}{4}$ (Area of ∆ABC)

Find the area of the triangle whose vertices are: (2, 3), (−1, 0), (2, −4)

#### Reference Material

Solution for concept: Area of a Triangle. For the course 8th-10th CBSE S
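Since the site's worked solution is behind a login, here is a quick numerical check of the main question using the shoelace formula (standard coordinate geometry, not the site's official solution):

```python
def area(p, q, r):
    """Unsigned triangle area via the shoelace formula."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

def mid(p, q):
    """Midpoint of the segment pq."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C = (0, -1), (2, 1), (0, 3)
small = area(mid(A, B), mid(B, C), mid(C, A))  # midpoint (medial) triangle
big = area(A, B, C)
print(small, big, small / big)   # 1.0 4.0 0.25  -> ratio 1 : 4
```

The 1 : 4 ratio is exactly what the last similar question asks you to prove in general for the medial triangle ∆DEF.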
https://codedump.io/share/NDdxQXulGj1z/1/issues-with-raytracing-triangles-orientation-and-coloring
Cache Staheli - 1 year ago - Java Question

# Issues with Raytracing triangles (orientation and coloring)

EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.

EDIT: After following a suggestion from @TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.

I'm having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach. The new approach I took worked much better, but I'm seeing a couple of issues with my raytracer now:

1. There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).

Here is what I am expecting to see:

But here is what I am actually seeing:

Addressing the first problem: even if I remove all other objects (including the blue triangle), the yellow triangle is always rendered black, so I don't believe it is an issue with the shadow rays I am sending out. I suspect it has to do with the angle of the triangle/plane relative to the camera.

Here is my process for ray-tracing triangles, which is based on the process described on this website:

1. Determine if the ray intersects the plane.
2. If it does, determine if the intersection point lies inside the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:

```java
private Vector getPlaneIntersectionVector(Ray ray) {
    double epsilon = 0.00000001;
    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());

    // ray is parallel to triangle plane
    if (Math.abs(denominator) < epsilon) {
        // ray lies in triangle plane
        if (numerator == 0) {
            return null;
        }
        // ray is disjoint from plane
        else {
            return null;
        }
    }

    double intersectionDistance = numerator / denominator;

    // intersectionDistance < 0 means the "intersection" is behind the ray
    // (pointing away from the plane), so it is not a real intersection
    return (intersectionDistance >= 0) ? ray.getLocationWithMagnitude(intersectionDistance) : null;
}
```

And once I have determined that the ray intersects the plane, here is the code to determine whether the intersection point is inside the triangle:

```java
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector) {
    // Get edges of triangle
    Vector u = getU();
    Vector v = getV();

    // Pre-compute the five unique dot products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);

    double denominator = (uv * uv) - (uu * vv);

    // Get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1) {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1) {
        return false;
    }
    return true;
}
```

I think I am having some issue with my coloring. I think it has to do with the normals of the various triangles.
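A language-neutral way to sanity-check the parametric (s, t) test above is to re-run it on triangles with known answers. This Python sketch mirrors the Java method, where the b, u and v arguments play the roles of getB(), getU() and getV():

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def inside_triangle(p, b, u, v):
    # p: plane intersection point; b: triangle vertex; u, v: edge vectors from b.
    # Same five unique dot products and (s, t) formulas as the Java version.
    uu, uv, vv = dot(u, u), dot(u, v), dot(v, v)
    w = sub(p, b)
    wu, wv = dot(w, u), dot(w, v)
    denom = uv * uv - uu * vv
    s = (uv * wv - vv * wu) / denom
    if s < 0 or s > 1:
        return False
    t = (uv * wu - uu * wv) / denom
    return t >= 0 and s + t <= 1

# unit right triangle with vertices (0,0,0), (1,0,0), (0,1,0)
print(inside_triangle((0.25, 0.25, 0), (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # True
```

A point like (0.75, 0.75, 0) fails the s + t <= 1 check, and a point with a negative coordinate fails the s >= 0 check, as expected.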
Here is the equation I am considering when building my lighting model for spheres and triangles (it appears as the comment at the top of the method below).

Now, here is the code that does this:

```java
public Color calculateIlluminationModel(Vector normal, boolean isInShadow, Scene scene, Ray ray, Vector intersectionPoint) {
    // c = cr * ca + cr * cl * max(0, n dot l) + cl * cp * max(0, e dot r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor());          // cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor());      // cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor());       // ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight());   // cp
    Vector directionToLight = scene.getDirectionToLight().normalize();        // l

    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);
    Vector reflectionVector = normal.multiply(2).multiply(angleBetweenLightAndNormal).subtract(directionToLight).normalize(); // r
    double visibilityTerm = isInShadow ? 0 : 1;

    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);

    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor).multiply(lambertianComponent).multiply(visibilityTerm);

    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor).multiply(phongComponent).multiply(visibilityTerm);

    // per the formula above, the method then sums ambientTerm, diffuseTerm
    // and phongTerm and converts the result back into the returned Color
}
```

I am seeing that the dot product between the normal and the light source is −1 for the yellow triangle, and about −0.707 for the blue triangle, so I'm not sure whether the normal pointing the wrong way is the problem.
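The illumination model can also be checked in isolation from the ray tracer. The sketch below is a hedged Python re-implementation of the commented formula, treating colors as plain RGB triples rather than the post's Vector/Color classes. With the normal facing the light it returns a bright color, and with the normal flipped it collapses to the ambient term, which matches the all-black-triangle symptom:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, k):
    return tuple(x * k for x in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def illuminate(cr, ca, cl, cp, n, l, e, p, in_shadow):
    # c = cr*ca + cr*cl*max(0, n.l) + cl*cp*max(0, e.r)^p, per RGB component
    vis = 0.0 if in_shadow else 1.0
    lam = max(0.0, dot(n, l))
    r = sub(scale(n, 2.0 * dot(n, l)), l)      # reflection of l about n
    spec = max(0.0, dot(e, r)) ** p
    return tuple(cr[i] * ca[i]
                 + vis * (cr[i] * cl[i] * lam + cl[i] * cp[i] * spec)
                 for i in range(3))

# yellow material lit head-on: bright yellow
print(illuminate((1, 1, 0), (0.1, 0.1, 0.1), (1, 1, 1), (0, 0, 0),
                 (0, 0, 1), (0, 0, 1), (0, 0, 1), 1, False))
# same material with the normal flipped away from the light: ambient term only
print(illuminate((1, 1, 0), (0.1, 0.1, 0.1), (1, 1, 1), (0, 0, 0),
                 (0, 0, -1), (0, 0, 1), (0, 0, 1), 1, False))
```

The second call shows exactly what a negative n·l does: the Lambertian factor clamps to zero and only the dim ambient contribution survives.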
Regardless, when I made sure the angle between the light and the normal was positive (`Math.abs(directionToLight.dotProduct(normal))`), it caused the opposite problem:

I suspect it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.

Note: my triangles have vertices (a, b, c), and the edges (u, v) are computed as a − b and c − b respectively (these are also used for calculating the plane/triangle normal). A Vector is an (x, y, z) point, and a Ray is made up of an origin Vector and a normalized direction Vector.

Here is how I am calculating normals for all triangles:

```java
private Vector getPlaneNormal() {
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
```

Please let me know if I left out anything you think is important for solving these issues.

EDIT: After help from @TheVee, this is what I have at the end. There are still problems with z-buffering and with Phong highlights on the triangles, but the problem I was trying to solve here is fixed.

From the answer: because we know the correct normal is either the one we are using or its negative, we can fix both cases at once by taking a preventive absolute value wherever a positive dot product is implicitly assumed (in your code, that is angleBetweenLightAndNormal). Some libraries such as OpenGL do that for you, and on top use the additional information (the sign) to choose between two different materials (front and back) you may provide if desired. Alternatively, they can be set to not draw the back faces of solid objects at all, because those will be overdrawn by front faces anyway (known as face culling), saving about half of the numerical work.
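As an alternative to scattering Math.abs() calls, the geometric normal can be flipped once, toward the viewer, before any shading is done; a minimal Python sketch (normalization omitted for brevity):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shading_normal(a, b, c, ray_dir):
    # geometric normal from the post's edge convention: u = a - b, v = c - b
    u = tuple(x - y for x, y in zip(a, b))
    v = tuple(x - y for x, y in zip(c, b))
    n = cross(u, v)
    # if the normal points the same way as the ray, the camera is looking at
    # the back face: flip so the normal always faces the viewer
    if dot(n, ray_dir) > 0:
        n = tuple(-x for x in n)
    return n

# camera above the xy-plane looking straight down (ray direction (0, 0, -1))
print(shading_normal((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -1)))  # (0, 0, 1)
```

With the opposite viewing direction the same triangle yields (0, 0, -1), so every dot product with the view and light directions keeps a consistent sign regardless of vertex winding.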
https://www.esaral.com/q/describe-the-sample-space-for-the-indicated-experiment-a-coin-is-tossed-three-times-39590
# Describe the sample space for the indicated experiment: A coin is tossed three times. Question: Describe the sample space for the indicated experiment: A coin is tossed three times. Solution: A coin has two faces: head (H) and tail (T). When a coin is tossed three times, the total number of possible outcomes is $2^{3}=8$ Thus, when a coin is tossed three times, the sample space is given by: S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
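The same sample space can be generated mechanically as the Cartesian product {H, T} × {H, T} × {H, T}; a quick check in Python:

```python
from itertools import product

# each toss contributes H or T; three tosses give the Cartesian product
sample_space = ["".join(outcome) for outcome in product("HT", repeat=3)]

print(len(sample_space))  # 8, i.e. 2**3
print(sample_space)
# ['HHH', 'HHT', 'HTH', 'HTT', 'THH', 'THT', 'TTH', 'TTT']
```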
http://mvbinary.r-forge.r-project.org/
Modelling binary variables with blocks of specific one-factor distributions. All the experiments are run with MvBinary 1.0.

### Introduction

MvBinary models large binary data sets with a new family of one-factor distributions per independent block. The model provides an explicit probability for each event, thus avoiding the numeric approximations often made by existing methods. Its interpretation is easy, since each variable is described by two continuous parameters (marginal probability and strength of dependency) and by one binary parameter (direction of the dependency). Parameter estimation is performed by the inference margin procedure, whose second step is achieved by an expectation-maximization algorithm. Model selection is carried out by a deterministic approach which strongly reduces the number of competing models. This approach uses a hierarchical ascendant classification of the variables, based on the empiric Cramér's V, to select a narrow subset of models. More technical details here.

### Overview of the MvBinary functions

This section performs the modelling of the Plants data set. It uses all the functions implemented in the package MvBinary and can be used as a tutorial. The data were extracted from the USA plants database, July 29, 2015. They describe $$35\,583$$ plants by indicating whether they occur in 69 states (USA, Canada, Puerto Rico, Virgin Islands, Greenland and St Pierre and Miquelon). Model selection is achieved by the deterministic algorithm, where the Ward criterion is used for the HAC. The EM algorithm is randomly initialized 40 times and is stopped when two successive iterations increase the log-likelihood by less than 0.01.
```r
rm(list=ls())
require(MvBinary)
data("plants")
```

```
Loading required package: MvBinary
```

### Model selection and parameter inference

```r
# Model selection and parameter estimation (10 minutes) on 4 CPU cores
results <- MvBinaryEstim(plants, nbcores = 4)

# Summary of the resulting model
summary(results)
```

```
****************************************************************************************
The model contains 10 blocks
Its log-likelihood is -520705
Its BIC criterion value is -521428
The model requires 138 parameters
****************************************************************************************
The blocks are defined as follows
Block 1 contains the following 8 variables: Alabama Florida Georgia Louisiana Mississippi
  North Carolina South Carolina Virginia
Block 2 contains the following 8 variables: Alaska Greenland Labrador Newfoundland
  Northwest Territories Nunavut StPierreandMiquelon Yukon
Block 3 contains the following 4 variables: Alberta British Columbia Manitoba Saskatchewan
Block 4 contains the following 8 variables: Arizona Colorado Idaho Montana Nevada
  New Mexico Utah Wyoming
Block 5 contains the following 5 variables: Arkansas Kansas Missouri Oklahoma Texas
Block 6 contains the following 3 variables: California Oregon Washington
Block 7 contains the following 15 variables: Connecticut Delaware District of Columbia
  Illinois Indiana Kentucky Maryland Massachusetts New Jersey New York Ohio Pennsylvania
  Rhode Island Tennessee West Virginia
Block 8 contains the following 3 variables: Hawaii PuertoRico VirginIslands
Block 9 contains the following 7 variables: Iowa Michigan Minnesota Nebraska North Dakota
  South Dakota Wisconsin
Block 10 contains the following 8 variables: Maine New Brunswick New Hampshire Nova Scotia
  Ontario Prince Edward Island Quebec Vermont
****************************************************************************************
```

### Relevance of the modelled dependencies

We show the relevance of the dependencies detected by the estimated model. The first figure shows the correspondence between the Cramér's V computed from the model parameters and the empiric Cramér's V, for each pair of variables that the estimated model treats as dependent. The second figure shows that the estimated model captures the main dependencies well.

```r
# Computation of the empiric Cramer's V
Vempiric <- ComputeEmpiricCramer(plants)

# Computation of the model Cramer's V
Vmodel <- ComputeMvBinaryCramer(results)

# Matrix containing both Cramer's V
Vmatrix <- data.frame(State1   = rep(colnames(plants), times = ncol(plants)),
                      State2   = rep(colnames(plants), each = ncol(plants)),
                      Vempiric = as.numeric(Vempiric),
                      Vmodel   = as.numeric(Vmodel))

# Comparison of the Cramer's V
plot(Vmatrix$Vmodel[which(Vmatrix$Vmodel != 0)],
     Vmatrix$Vempiric[which(Vmatrix$Vmodel != 0)],
     xlim = c(0, 1), ylim = c(0, 1),
     xlab = "Model Cramer's V", ylab = "Empiric Cramer's V")
lines(c(0, 1), c(0, 1))

boxplot(Vmatrix$Vempiric ~ as.factor(Vmatrix$Vmodel != 0),
        xlab = "Dependency considered by the model", ylab = "Empiric Cramer's V")
```

### Geographic coherence of the blocks

The estimated model is composed of 10 blocks of dependent variables. The figure below shows that this block repartition has a geographic meaning.

```r
library(maps)
library(mapproj)
library(mapdata)
library(raster)
library(sp)
library(rgdal)
library(rgeos)
library(maptools)
```

```
# maps v3.1: updated 'world': all lakes moved to separate new 'lakes' database.
#   Type '?world' or 'news(package="maps")'.
Loading required package: sp
rgdal: version: 1.1-10, (SVN revision 622)
Geospatial Data Abstraction Library extensions to R successfully loaded
Loaded GDAL runtime: GDAL 1.11.3, released 2015/09/16
Path to GDAL shared files: /usr/share/gdal/1.11
Loaded PROJ.4 runtime: Rel. 4.9.2, 08 September 2015, [PJ_VERSION: 492]
Path to PROJ.4 shared files: (autodetected)
Linking to sp version: 1.2-3
rgeos version: 0.3-19, (SVN revision 524)
GEOS runtime version: 3.5.0-CAPI-1.9.0 r4084
Linking to sp version: 1.2-3
Polygon checking: TRUE
Checking rgeos availability: TRUE
```

```r
## Specify a geographic extent for the map
## by defining the top-left and bottom-right geographic coordinates
mapExtent <- rbind(c(-160, 70), c(-63, 25))

## Specify the required projection using a proj4 string
## Use http://www.spatialreference.org/ to find the required string
## Polyconic for North America
newProj <- CRS("+proj=poly +lat_0=0 +lon_0=-100 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")

## Project other layers
can1Pr <- spTransform(getData('GADM', country="CAN", level=1), newProj)
us1Pr <- spTransform(getData('GADM', country="USA", level=1), newProj)

## Plot each projected layer, beginning with the projected extent
plot(spTransform(SpatialPoints(mapExtent, proj4string=CRS("+proj=longlat")), newProj), pch=NA)

require(graphics)
colo <- rainbow(max(results@blocks))
colo[c(2,5)] <- colo[c(5,2)]
palette(colo)
for (k in 1:max(results@blocks)){
  theseJurisdictions <- names(results@blocks)[which(results@blocks==k)]
  if (any(can1Pr$NAME_1 %in% theseJurisdictions))
    plot(can1Pr[can1Pr$NAME_1 %in% theseJurisdictions, ], border="white", col=k, add=TRUE)
  if (any(us1Pr$NAME_1 %in% theseJurisdictions))
    plot(us1Pr[us1Pr$NAME_1 %in% theseJurisdictions, ], border="white", col=k, add=TRUE)
}
legend("bottomleft", legend=paste("Block", 1:max(results@blocks)), col=colo, lty=0, pch=15)

plot(can1Pr[can1Pr$NAME_1 %in% can1Pr$NAME_1[5], ], border="white",
plot(spTransform(getData('GADM', country="GRL", level=1), newProj), border="white",
plot(spTransform(getData('GADM', country="VIR", level=1), newProj), border="white",
plot(spTransform(getData('GADM', country="SPM", level=1), newProj), border="white",
     col=results@blocks[which(names(results@blocks)=="StPierreandMiquelon")], add=TRUE)
```

### Model interpretation

The parameters permit an easy interpretation of the whole distribution. The means per block of the values of $$\alpha_j$$ and $$\varepsilon_j$$ are summarized in the following figure. Note that the model only detects positive dependencies, since for $$j=1,\ldots,d$$, $$\delta_j=1$$.

Each block is composed of highly dependent variables (high values of the parameters $$\varepsilon_j$$ and $$\delta_j=1$$). Therefore, the knowledge of one variable in a block provides strong information about the other variables affiliated to this block. For instance, the most dependent block is Block 10 (composed of Prince Edward Island, Nova Scotia, New Brunswick, New Hampshire, Vermont, Maine, Québec and Ontario). Thus, a plant occurs in Ontario with probability $$\alpha_{Ontario}=0.14$$, while it occurs with probability $$0.83$$ if it occurs in Québec. The least dependent block is composed of the tropical states (Virgin Islands, Puerto Rico and Hawaii); these weaker dependencies can be explained by their large geographic remoteness. Finally, the parameters $$\alpha_j$$ allow the regions to be described by their amount of plants: cold regions (Blocks 2, 3 and 10) obtain small values of $$\alpha_j$$, while the "sun belt" obtains large values of this parameter.

```r
# A summary of the block parameters
plot(NA, xlab=expression(alpha[j]), ylab=expression(epsilon[j]), xlim=c(0,0.3), ylim=c(0.65,1))
for (b in 1:max(results@blocks)){
  points(mean(results@alpha[which(results@blocks==b)]),
         mean(results@epsilon[which(results@blocks==b)]),
         col=b, pch=b+10, lwd=4)
  text(mean(results@alpha[which(results@blocks==b)]),
       mean(results@epsilon[which(results@blocks==b)])+0.02, b)
}
legend("topright", legend=paste("Block", 1:max(results@blocks)), col=colo, lty=0,
       pch=10+1:max(results@blocks))
```
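For a pair of binary variables, the empiric Cramér's V used above reduces to the absolute value of the phi coefficient of their 2×2 contingency table, so individual values can be cross-checked without the R package; a minimal Python sketch:

```python
def cramers_v_binary(x, y):
    # x, y: equal-length sequences of 0/1 values
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    # |phi| = |n11*n00 - n10*n01| / sqrt(product of the four margins)
    denom = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    if denom == 0:
        return 0.0
    return abs(n11 * n00 - n10 * n01) / denom ** 0.5

print(cramers_v_binary([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0, identical variables
print(cramers_v_binary([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0, independent pattern
```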
http://libros.duhnnae.com/2017/jul7/150088283626-Midwest-cousins-of-Barnes-Wall-lattices-Mathematics-Number-Theory.php
# Midwest cousins of Barnes-Wall lattices - Mathematics > Number Theory

Abstract: Given a rational lattice and a suitable set of linear transformations, we construct a cousin lattice. Sufficient conditions are given for integrality, evenness and unimodularity. When the input is a Barnes-Wall lattice, we get multi-parameter series of cousins. There is a subseries consisting of unimodular lattices which have ranks $2^{d-1}\pm 2^{d-k-1}$, for odd integers $d\ge 3$ and integers $k=1,2,\ldots,\frac{d-1}{2}$. Their minimum norms are moderately high: $2^{\lfloor d/2 \rfloor - 1}$.

Author: Robert L. Griess Jr

Source: https://arxiv.org/
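The rank and minimum-norm formulas from the abstract are easy to tabulate for small d; a purely illustrative Python sketch:

```python
def unimodular_ranks(d):
    # ranks 2^(d-1) +/- 2^(d-k-1) for k = 1, 2, ..., (d-1)/2, with d odd, d >= 3
    assert d >= 3 and d % 2 == 1
    return sorted(2**(d - 1) + s * 2**(d - k - 1)
                  for k in range(1, (d - 1)//2 + 1) for s in (1, -1))

def min_norm(d):
    # 2^(floor(d/2) - 1)
    return 2 ** (d // 2 - 1)

print(unimodular_ranks(3))  # [2, 6]
print(unimodular_ranks(5))  # [8, 12, 20, 24]
print(min_norm(5))          # 2
```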
https://gmatclub.com/forum/if-b-2-and-2x-3b-0-which-of-the-following-must-be-tru-167460.html
# If b < 2 and 2x - 3b = 0, which of the following must be true?

Math Expert
Joined: 02 Sep 2009
Posts: 49303
13 Feb 2014, 02:20

Difficulty: 25% (medium). Question stats: 66% (01:09) correct, 34% (01:17) wrong, based on 755 sessions.

The Official Guide For GMAT® Quantitative Review, 2nd Edition

If b < 2 and 2x - 3b = 0, which of the following must be true?

(A) x > -3
(B) x < 2
(C) x = 3
(D) x < 3
(E) x > 3

Problem Solving
Question: 92
Category: Algebra; Inequalities
Page: 73
Difficulty: 600
Math Expert
Joined: 02 Sep 2009
13 Feb 2014, 02:20

SOLUTION

$$2x-3b=0$$ --> $$b=\frac{2x}{3}<2$$ --> $$\frac{2x}{3}<2$$ --> $$x<3$$.

Answer: D.

##### General Discussion

Intern
Joined: 10 Apr 2012
13 Feb 2014, 05:44

2x − 3·LT2 = 0, where LT means "less than":
2x − LT6 = 0, so 2x = LT6, so x = LT6/2, i.e. x = LT3.

Intern
Joined: 08 Nov 2013
13 Feb 2014, 06:12

From 2x − 3b = 0 we have x = (3/2)b. Since b < 2, it follows that x < 3. If x < 3, x is ALWAYS less than 2. Thus, the correct answer is B.

Intern
Joined: 10 Apr 2012
13 Feb 2014, 06:48

@magneticlp: What about values between 2 and 3? Remember there are no integer restrictions in this question. Also note that it is a must-be-true question.

Director
Joined: 25 Apr 2012
13 Feb 2014, 11:57

If b < 2 and 2x - 3b = 0, which of the following must be true?
(A) x > -3 (B) x < 2 (C) x = 3 (D) x < 3 (E) x > 3

Sol: given 2x − 3b = 0, we have x = 3b/2 with b < 2.

(A) x > −3: if b = −24 then x = −36, so this could be true but is not a must.
(B) x < 2: if b = 1.9 then x = 3(1.9)/2 = 2.85 > 2, so this could be true but is not a must.
(C) x = 3: this would need b = 2, which is excluded since b < 2.
(D) x < 3: 3b/2 < 3 gives (3b − 6)/2 < 0, i.e. b < 2, which is exactly the given condition, so we need not look beyond option D. Out of curiosity, consider E as well.
(E) x > 3: 3b/2 > 3 gives b > 2, clearly not true.

Ans: D

"If you can't fly then run, if you can't run then walk, if you can't walk then crawl, but whatever you do you have to keep moving forward."

Intern
Joined: 06 Jan 2014
13 Feb 2014, 18:15

x and b do NOT have to be integers. Take a value of b close to 2:

$$2x=3b$$
$$2x=3(1.9)=5.7$$
$$x=2.85$$
$$2.85<3$$

Director
Joined: 03 Feb 2013
15 Feb 2014, 10:53

2x = 3b, so 2x/3 = b < 2, hence 2x/3 < 2 and x < 3. Option D)

Manager
Joined: 20 Dec 2013
16 Feb 2014, 02:45

Option D.
We're given b < 2. Multiply both sides by 3: 3b < 6. Given 3b = 2x, so 2x < 6 and x < 3.

Manager
Joined: 17 Nov 2013
24 Apr 2016, 16:18

To solve this problem, first solve for b. You are given two facts: b < 2 and 2x − 3b = 0.

1) Solve for b: 2x − 3b = 0, so −3b = −2x, so b = 2x/3.
2) Substitute into b < 2 and solve for x: 2x/3 < 2, so x < 3.
3) x < 3 is answer choice D!

Intern
Joined: 11 Nov 2014
06 May 2016, 03:51

Given 2x − 3b = 0: 2x = 3b, so x = 3b/2 = 3/(2/b). For 0 < b < 2 the denominator 2/b is always greater than 1, so x < 3 (and for b ≤ 0, x = 3b/2 ≤ 0 < 3 directly). Correct option: D.

EMPOWERgmat Instructor
Joined: 19 Dec 2014
13 Feb 2018, 22:19

Hi All,

This question can be solved by TESTing VALUES, although you'll likely need more than one TEST to get to the solution. We're told that B < 2 and 2X − 3B = 0, and asked what MUST be true.

IF B = 0: 2X − 3(0) = 0, so X = 0. Eliminate C and E.
IF B = −10: 2X + 30 = 0, so X = −15. Eliminate A.

Notice how similar answers B and D are? On a fundamental level, any number that "fits" answer B ALSO fits answer D (but D includes some solutions that are NOT in B). Since there can't be two correct answers, B cannot be correct. That having been said, here's how you can prove that D is the answer:

IF B = 1.99: 2X − 3(1.99) = 0, so 2X ≈ 6 and X ≈ 2.985 > 2. Eliminate B.
GMAT assassins aren't born, they're made,
Rich

Target Test Prep Representative
Joined: 14 Oct 2015
15 Feb 2018, 11:06

If b < 2, then multiplying the inequality by 3 yields 3b < 6. Manipulating the equation, we have 2x = 3b; thus 2x < 6, so x < 3.

Scott Woodbury-Stewart
Founder and CEO

Intern
Joined: 03 Aug 2017
19 Jul 2018, 10:06

From the question stem, 2x − 3b = 0, so 2x = 3b. The boundary case would be x = 3 with b = 2, BUT b < 2, THEREFORE x < 3. Answer choice D.
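The algebra in the thread (b < 2 and 2x = 3b imply x < 3, with x able to get arbitrarily close to 3) can be spot-checked numerically; a small Python sketch:

```python
# sample values of b below 2, including negatives and values close to 2
for b in (-10, 0, 1, 1.9, 1.999):
    x = 3 * b / 2          # from 2x - 3b = 0
    assert x < 3           # answer (D) holds for every b < 2

# b = 1.9 gives x just under 2.85, already more than 2, ruling out (B)
print(3 * 1.9 / 2)
```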
https://docs.nvidia.com/deeplearning/modulus/user_guide/features/performance.html
# Performance¶ A collection of various methods for accelerating Modulus is presented below. The figures below show a summary of performance improvements using various Modulus features over different releases. Note The higher vRAM in A100 GPUs means that we can use twice the batch size/GPU compared to the V100 runs. For comparison purposes, the total batch size is held constant, hence the A100 plots use 2 A100 GPUs. Note These figures are only for summary purposes and the runs were performed on the flow part of the example presented in Industrial Heat Sink. For more details on performance gains due to individual features, please refer to the subsequent sections. ## Running jobs using TF32 math mode¶ TensorFloat-32 (TF32) is a new math mode available on NVIDIA A100 GPUs for handling the matrix math and tensor operations used during the training of a neural network. On A100 GPUs, the TF32 feature is "ON" by default and you do not need to make any modifications to the regular scripts to use this feature. With this feature, you can obtain up to a 1.8x speed-up over FP32 on A100 GPUs for the FPGA problem. This allows us to achieve the same results with dramatically reduced training times (Fig. 53) without change in accuracy and loss convergence (Table 2 and Fig. 54).

| Case Description | $$P_{drop}$$ $$(Pa)$$ |
| --- | --- |
| Modulus: Fully Connected Networks with FP32 | 29.24 |
| Modulus: Fully Connected Networks with TF32 | 29.13 |
| OpenFOAM Solver | 28.03 |
| Commercial Solver | 28.38 |

## Running jobs using Just-In-Time (JIT) compilation¶ JIT compilation is a feature where elements of the computational graph are compiled from native PyTorch to the TorchScript backend. This allows for optimizations like avoiding Python's Global Interpreter Lock (GIL) as well as compute optimizations including dead code elimination, common subexpression elimination, and pointwise kernel fusion. PINNs used in Modulus have many peculiarities, including the presence of many pointwise operations.
Such operations, while computationally inexpensive, put a large pressure on the memory subsystem of a GPU. JIT allows for kernel fusion, so that many of these operations can be computed in a single kernel, thereby reducing the number of memory transfers between GPU memory and the compute units. JIT is enabled by default in Modulus through the jit option in the config file. You can optionally disable JIT by adding a jit: false option in the config file or passing a jit=False command-line option. ## CUDA Graphs¶ Modulus supports CUDA Graph optimization, which can accelerate problems that are launch-latency bottlenecked and improve parallel performance. Due to the strong scaling of GPU hardware, some machine learning problems can struggle to keep the GPU saturated, resulting in work-submission latency. This also impacts scalability, since work gets delayed by these bottlenecks. CUDA Graphs provide a solution to this problem by allowing the CPU to submit a sequence of jobs to the GPU at once, rather than individually. For problems that are not bound by matrix multiplications on the GPU, this can produce a notable speed-up. Regardless of performance gains, it is recommended to use CUDA Graphs when possible, particularly when using multi-GPU and multi-node training. For additional details on CUDA Graphs in PyTorch, the reader is referred to the PyTorch Blog. There are three steps to using CUDA Graphs: 1. Warm-up phase, where training is executed normally. 2. Recording phase, during which the forward and backward kernels of one training iteration are recorded into a graph. 3. Replay of the recorded graph, which is used for the rest of training. Modulus supports this PyTorch utility, and it is turned on by default. CUDA Graphs can be enabled using Hydra. It is suggested to use at least 20 warm-up steps, which is the default. After 20 training iterations, Modulus will then attempt to record a CUDA Graph and, if successful, it will replay it for the remainder of training.
```yaml
cuda_graphs: True
cuda_graph_warmup: 20
```

Warning CUDA Graphs is presently a beta feature in PyTorch and may change in the future. This feature requires newer NCCL versions and host GPU drivers (R465 or greater). If errors are occurring, please verify your drivers are up to date. Warning CUDA Graphs do not work for all user guide examples when using multiple GPUs. Some examples require find_unused_parameters when using DDP, which is not compatible with CUDA Graphs. Note NVTX markers do not work inside of CUDA Graphs, thus we suggest shutting this feature off when profiling the code. ## Meshless Finite Derivatives¶ Meshless finite derivatives is an alternative approach for calculating derivatives for physics-informed learning. Rather than relying on automatic differentiation to compute analytical gradients, meshless finite derivatives queries stencil points on the fly to approximate the gradients using finite differences. With autodiff, multiple automatic differentiation calls are needed to calculate the higher-order derivatives as well as the backward pass for optimization. The trouble is that computational complexity increases exponentially with every additional autodiff pass needed, which can significantly slow training. Meshless finite derivatives replaces the need for autodiff with additional forward passes. Since the finite-difference stencil points are queried on demand, no grid discretization is needed, preserving mesh-free training. For many problems, the additional computation needed for the forward passes in meshless finite derivatives is far less than the autodiff equivalent. This approach can potentially yield anywhere from a $$2-4$$ times speed-up over the autodiff approach with comparable accuracy. To use meshless finite derivatives, one just needs to define a MeshlessFiniteDerivative node and add it to a constraint that will require gradient quantities. Modulus will prioritize the use of meshless finite derivatives over autodiff when provided.
When creating a MeshlessFiniteDerivative node, the derivatives that will be needed must be explicitly defined. This can be done through just a list, or by accessing the needed derivatives from other nodes. Additionally, this node requires a node whose inputs consist of the independent variables and whose outputs are the quantities derivatives are needed for. For example, the derivative $$\partial f / \partial x$$ will require a node with input variables that contain $$x$$ and output $$f$$. Switching to meshless finite derivatives is straightforward for most problems. As an example, for LDC the following code snippet turns on meshless finite derivatives, providing a $$3$$ times speed-up:

```python
from modulus.eq.derivatives import MeshlessFiniteDerivative

# Make list of nodes to unroll graph on
ns = NavierStokes(nu=0.01, rho=1.0, dim=2, time=False)
flow_net = instantiate_arch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u"), Key("v"), Key("p")],
    cfg=cfg.arch.fully_connected,
)
flow_net_node = flow_net.make_node(name="flow_network", jit=cfg.jit)

# Define derivatives needed to be calculated
# Requirements for 2D N-S
derivatives_strs = set(["u__x", "v__x", "p__x", "v__x__x", "u__x__x", "u__y",
                        "v__y", "p__y", "u__y__y", "v__y__y"])
derivatives = Key.convert_list(derivatives_strs)
# Or get the derivatives from the N-S node itself
derivatives = []
for node in ns.make_nodes():
    for key in node.derivatives:
        derivatives.append(Key(key.name, size=key.size, derivatives=key.derivatives))

# Create MFD node
mfd_node = MeshlessFiniteDerivative.make_node(
    node_model=flow_net_node,
    derivatives=derivatives,
    dx=0.001,
    max_batch_size=4 * cfg.batch_size.Interior,
)

nodes = ns.make_nodes() + [flow_net_node, mfd_node]
```

Warning Meshless Finite Derivatives is a development from the Modulus team and is presently in beta. Use at your own discretion; stability and convergence are not guaranteed. The API is subject to change in future versions.
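The mechanics of the approach can be sketched outside of Modulus in a few lines: treat the trained network as a black-box function and approximate derivatives by extra forward passes at stencil points. The function `u` and the stencil helpers below are illustrative stand-ins, not Modulus API; the default `dx` matches the value suggested in this guide.

```python
def central_first(f, x, dx=1e-3):
    """First derivative via a 2-point central stencil: two extra forward passes."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def central_second(f, x, dx=1e-3):
    """Second derivative via a 3-point central stencil: three forward passes,
    instead of two nested automatic-differentiation passes."""
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / (dx * dx)

# Stand-in for a trained network: u(x) = x^3, so u' = 3x^2 and u'' = 6x.
u = lambda x: x ** 3

# At x = 2 both stencils recover the analytic values (12.0) to O(dx^2).
assert abs(central_first(u, 2.0) - 12.0) < 1e-5
assert abs(central_second(u, 2.0) - 12.0) < 1e-4
```

The O(dx²) truncation error is why the choice of `dx` matters so much in the pitfalls listed below: too large and the approximation degrades, too small and float32 round-off dominates.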
### Present Pitfalls¶ • Setting the dx parameter is a very critical part of meshless finite derivatives. While classical numerical methods offer clear guidance on this topic, it does not directly apply here due to additional stability constraints placed by the backward pass and optimization. For most problems in our user guide, a dx close to 0.001 works well and yields good convergence; lower values will likely lead to instability during training with a float32-precision model. Additional details, tools, and guidance on the specification of dx will be forthcoming in the near future. • Meshless finite derivatives can increase the noise during training compared to automatic differentiation, due to their approximate nature. Thus this feature is currently not suggested for problems that exhibit unstable training characteristics under automatic differentiation. • Meshless finite derivatives can converge to the wrong solution, and accuracy is highly dependent on the dx used. • Performance gains are problem specific and depend on the derivatives needed. Presently, the best way to further increase the performance of meshless finite derivatives is to increase max_batch_size when creating the meshless finite derivative node. • Modulus will add automatic-differentiation nodes if all required derivatives are not specified to the meshless finite derivative. ## Running jobs using multiple GPUs¶ To boost performance and to run larger problems, Modulus supports multi-GPU and multi-node scaling. This allows multiple processes, each targeting a single GPU, to perform independent forward and backward passes and aggregate the gradients collectively before updating the model weights. Fig. 55 shows the scaling performance of Modulus on the laminar FPGA test problem (script can be found at examples/fpga/laminar/fpga_flow.py) up to 1024 A100 GPUs on 128 nodes. The scaling efficiency from 16 to 1024 GPUs is almost 85%.
This data-parallel style of multi-GPU training keeps the number of points sampled per GPU constant while increasing the total effective batch size. You can use this to your advantage to increase the number of points sampled by increasing the number of GPUs, allowing you to handle much larger problems. To run a Modulus solution using multiple GPUs on a single compute node, first find the available GPUs using

```
nvidia-smi
```

Once you have found the available GPUs, you can run the job using mpirun -np #GPUs. The command below shows how to run the job using 2 GPUs:

```
mpirun -np 2 python fpga_flow.py
```

Modulus supports running a problem on multiple nodes as well, using a SLURM scheduler. Simply launch a job using srun and the appropriate flags and Modulus will set up the multi-node distributed process group. The command below shows how to launch a 2-node job with 8 GPUs per node (16 GPUs in total):

```
srun -n 16 --ntasks-per-node 8 --mpi=none python fpga_flow.py
```

Modulus also supports running on other clusters that do not have a SLURM scheduler, as long as the following environment variables are set for each process: • MASTER_ADDR: IP address of the node with rank 0 • MASTER_PORT: port that can be used for the different processes to communicate on • RANK: rank of that process • WORLD_SIZE: total number of participating processes • LOCAL_RANK (optional): rank of the process on its node
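For the non-SLURM path, the environment-variable contract listed above can be validated and parsed before launching training. The helper below is a hypothetical illustration of that contract, not part of Modulus; the fallback of deriving LOCAL_RANK from RANK assumes 8 GPUs per node, as in the srun example above.

```python
import os

REQUIRED = ("MASTER_ADDR", "MASTER_PORT", "RANK", "WORLD_SIZE")

def distributed_settings(env=os.environ, gpus_per_node=8):
    """Validate and parse the env-var contract described above
    (hypothetical helper, for illustration only)."""
    missing = [k for k in REQUIRED if k not in env]
    if missing:
        raise RuntimeError(f"missing distributed env vars: {missing}")
    rank = int(env["RANK"])
    world_size = int(env["WORLD_SIZE"])
    # LOCAL_RANK is optional; fall back to deriving it from the global rank
    # and the per-node task count (an assumption, matching the srun example).
    local_rank = int(env.get("LOCAL_RANK", rank % gpus_per_node))
    return {
        "init_method": f"tcp://{env['MASTER_ADDR']}:{env['MASTER_PORT']}",
        "rank": rank,
        "world_size": world_size,
        "local_rank": local_rank,
    }

# Example: rank 9 of a 16-process, 8-GPU-per-node job lands on local GPU 1.
cfg = distributed_settings({"MASTER_ADDR": "10.0.0.1", "MASTER_PORT": "29500",
                            "RANK": "9", "WORLD_SIZE": "16"})
assert cfg == {"init_method": "tcp://10.0.0.1:29500",
               "rank": 9, "world_size": 16, "local_rank": 1}
```

The resulting dictionary mirrors the arguments a typical `init_process_group` call would consume.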
https://physics.stackexchange.com/questions/102486/how-can-i-destroy-earth-with-physics
# How can I destroy earth with physics? [closed]

I want to destroy the whole earth using physics, and I would like to learn some of the ways that can be used to achieve this. I tried using a nuclear bomb but it takes so long, and I can't wait that much: Why does it take so long to make a nuclear bomb? What are some physics experiments and theories to help me destroy the earth? Note that I have all the money and privileges I need.

## closed as unclear what you're asking by Kyle Kanos, Brandon Enright, user10851, David Z♦ Mar 8 '14 at 3:08

• Send a space crew to divert an asteroid onto the earth. – Isopycnal Oscillation Mar 7 '14 at 20:00
• I don't think you need physics to do this. Politics is doing it just fine. – jerk_dadt Mar 7 '14 at 20:24
• You'll need a Death Star – Kyle Kanos Mar 7 '14 at 20:29
• If you can't wait that much maybe you don't have the willpower to destroy earth anyway – Claudiordgz Mar 7 '14 at 22:42
• Last December, I received a text message from the USGS which said there had been an earthquake near Polson, Montana with a magnitude of 22.0. My first thought was that the Solar System would have a new asteroid belt, but my calculations indicated that there probably wouldn't be enough left to form one; the energy released would be about 250,000 times Earth's gravitational binding energy, more than enough to vaporize the planet. Unfortunately for your evil schemes, a followup message revised the magnitude to 2.2. – Keith Thompson Mar 8 '14 at 2:41

Earth's gravitational binding energy is $-1.711×10^{32}~\mathrm{J}$, or $4.09×10^{13}$ gigatons. The Tsar Bomba massed $27$ tonnes to deliver $0.057$ gigatons.
Do the math for Earth disassembly by bomb. Substituting depleted uranium for the lead tamper that was used would double the yield. Earth's orbital speed averages 30 km/s and it masses $5.97×10^{24}~\mathrm{kg}$, so $mv^2 / 2 = 2.7×10^{33}~\mathrm{J}$. Rather than going for a messy asteroid impact and endangering Venusian hortas with debris infall, drop the Earth into the Sun: cancel its orbital motion and it falls in, $t = 64.56$ days. Patience. The Earth's rotational energy is only about $2.13 \times 10^{29}~\mathrm{J}$ (non-homogeneous sphere), so you'll need to obtain the orbit-stopping energy elsewhere.
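The arithmetic in this answer is easy to reproduce (a sketch, not from the original post). Note the sketch uses the uniform-density binding-energy formula $U = 3GM^2/5R$, which gives about $2.2×10^{32}~\mathrm{J}$; the $1.711×10^{32}~\mathrm{J}$ figure quoted above presumably comes from a different density model.

```python
# Back-of-envelope numbers for planetary disassembly, standard constants.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24            # Earth mass, kg
R = 6.371e6            # Earth radius, m
v_orbit = 3.0e4        # Earth mean orbital speed, m/s
GT = 4.184e18          # joules per gigaton of TNT

# Uniform-density gravitational binding energy: U = 3GM^2 / 5R (~2.24e32 J).
U = 3 * G * M**2 / (5 * R)
assert 2.2e32 < U < 2.3e32

# Tsar Bomba yield was 0.057 Gt; bombs needed to unbind the planet:
bombs = U / (0.057 * GT)
assert 9e14 < bombs < 1e15   # roughly a quadrillion Tsar Bombas

# Orbital kinetic energy, mv^2/2 -- an order of magnitude larger still:
KE = 0.5 * M * v_orbit**2
assert 2.6e33 < KE < 2.8e33
```

Either route, the energy budget dwarfs anything bomb-shaped, which is the answer's point.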
https://www.physicsforums.com/threads/nothing-can-exceed-the-speed-of-light.94419/
# Nothing can exceed the speed of light 1. Oct 12, 2005 ### cdm1a23 Hi. I have a couple questions about relativity. I hear that nothing can exceed the speed of light, but then I hear that all motion is relative. When one beam of light goes past another beam of light going in the opposite direction, aren't they moving at 2c with respect to each other? And if I shine a flashlight and walk in the opposite direction, aren't I traveling over c relative to the light? And when any light occurs at all, isn't everything else travelling at the speed of light relative to it? If this is not correct, does relativity mean that nothing can travel faster than light from any one reference frame's point of view? How do you determine what an object's speed is? What is the ultimate reference frame for something's speed to be "relative to" because everything in the universe is travelling at c compared to something else??? Thanks to anyone who can help! 2. Oct 12, 2005 ### mezarashi Relativity works in weird ways. It's not as weird as quantum physics, but it leads to accurate predictions. Suppose your friend is on a train passing by you at v, shining a flash light at you. You will see the light coming at you at c!!! As nothing can go faster than c, even light itself cannot violate this. But your friend on the train will argue that he is also seeing the light going towards you at c! The consequence of this is time dilation. He thinks your clock is going a bit slow and you think his clock is going a bit slow. The dilation will perfectly make up so that everybody will measure light at c. 3. Oct 12, 2005 ### cdm1a23 Weird... Does that mean that no matter what you are doing or what is going on, you will see light travelling at c, and someone else who is travelling at a different speed in a different way will see the same beam of light going at the same speed in the same direction? 4. Oct 12, 2005 ### mezarashi Indeed. As I've said. 
The rule that the speed of light is constant in all reference frames is so divine that even time will bend to serve this rule. 5. Oct 12, 2005 ### cdm1a23 hmm... I have to digest this... Thanks for the info... I'll be back on tomorrow if you or anyone else has any additional comments on any of my questions. Thanks Again! 6. Oct 12, 2005 ### stmoe I'm not an expert or anything, but from what I understand: Time is distorted to keep light at its speed, so if you were traveling at the speed of light, or near it, in the opposite direction of another beam of light (or any EMR for that matter), then time would almost stop for you so that the beam of light would still be traveling at c. ... and if you could go faster, then please let me know that the answer from #4 on is A on the econ test I took a few years ago. 7. Oct 12, 2005 ### brookstimtimtim Hmmm. So what happens when I'm sitting outside looking at the North Star and I turn around as fast as I can to look at a star in the south? Could I say that, because everything is relative, I did not move but everything else did? And if so, would that movement not appear to be FTL? 8. Oct 12, 2005 ### brookstimtimtim (along with the above, here is a copy of another post I had made) You know, I've heard a lot of this before and I'm just not buying it. Why is everyone so quick to say FTL is impossible? I see one problem with Einstein's theory. If mass changes with the speed of an object, then is mass a variable? Without knowing what interaction makes this happen, I think it is a bit much to say FTL is not possible. If one day we find the cause of this change in mass, it might be possible to change the mass of an object to nothing. Einstein's own theory says mass is not a constant, nor is time. Would this not mean that once we have a better understanding of both, time and mass could be changed to make FTL possible? Or do we keep doing as we have been doing, and say everything is impossible?
Another good one is the people who say there's no way to get around gravity, or that there's no antigravity. They are sure it is impossible, yet nobody knows what causes gravity. How do you know it is impossible when you don't know why it is possible? 9. Oct 12, 2005 ### mezarashi There are "things" that certainly do go faster than light. A shadow, for example, is not restricted by the speed of light. Or, by shining a laser onto the moon and then quickly changing the angle of the laser, we can make the beam on the moon appear to be moving faster than light. At the end of the day, however, nothing is actually going faster than light. There are many more such phenomena. Search google for "faster than light". The conclusion is, we are unable to communicate anything meaningful faster than light. That's what it really means. 10. Oct 12, 2005 ### JesseM Relativity says that nothing can have a velocity greater than that of light in a given reference frame, and likewise that light always travels at c in a given reference frame. But it is quite possible that, in a given reference frame, the distance between two objects whose individual velocities are less than c will be increasing at a rate greater than c--for example, if one object is moving at 0.8c to your left, and the other is moving at 0.8c to your right, then in your frame the distance between them is increasing at a rate of 1.6 light years per year. But, if you transform into the rest frame of one of these objects, then in this object's frame the second object will not be moving at 1.6c--instead you must use the formula for addition of relativistic velocities, $$(u + v)/(1 + uv/c^2)$$, to find that in this frame the second object will be moving at (0.8c + 0.8c)/(1 + 0.64) = about 0.9756c. So the light-speed limit is about the individual velocity of any object in a single reference frame, not about the rate that the distance between multiple objects is growing or shrinking in a single reference frame. 11.
Oct 12, 2005 ### JesseM Special relativity does not say all motion is relative, only inertial motion (motion that doesn't involve acceleration) is. If we are moving apart at constant velocity v, then we can look at things either from a frame where I am at rest and you are moving at velocity v, or a frame where you are at rest and I am moving at velocity v. On the other hand, if you are orbiting around me and I am not accelerating, it is not equally valid in SR to say that you are at rest and I am orbiting around you--the question of who is accelerating and who is not is an objective one, because the person who accelerates will feel G-forces (the centrifugal force, in the case of circular motion) while the one who doesn't will not. 12. Oct 13, 2005 ### Nomy-the wanderer We always need to have a constant, don't we? 13. Oct 13, 2005 ### Janus Staff Emeritus The cause for this change of "mass"* is the input of energy needed to change the velocity of your object. A simplified way of thinking about it is like this: As you add energy to an object to accelerate it, that energy adds inertia of its own to the object, thus making it even harder to make further increases in the object's speed. This in turn increases the amount of energy you need to supply in order to make further increases in the object's speed (just as if the object had gained mass). But this further increase of added energy itself adds inertia, etc. etc. The upshot is that the total amount of energy needed to get an object up to a certain speed (relative to yourself) approaches infinity as the object's speed approaches the speed of light. * I put mass in quotes here because there is some debate in convention as to whether it should be strictly considered as mass 14. Oct 13, 2005 ### brookstimtimtim Let me rephrase what I was saying above.
In our current understanding of physics, FTL travel is not possible, but that understanding is limited by the fact that we know very little about the origin of mass, gravity, and time. Until such time as we do have this understanding, I think it is bold to make statements and call them laws without knowing all the facts. I personally think we don't have as much figured out as we think we do. I remember in school the model of the atom; subatomic particles were not in the picture, and now they have been named. I kind of like the Bertrand Russell saying above. 15. Oct 13, 2005 ### cdm1a23 So what happens if you are travelling at c/2 and you pass various space ships travelling at velocities of different fractions of c, both travelling toward and away from you? Is your time different compared to all of these people depending on how you pass? Would you "see" some ships with people who are aging much faster than you, and others who may pass you that are not aging at all really? Doesn't it seem kind of strange that merely being in motion compared to something else affects the passage of time? Or does it have more to do with the amount of energy that object contains? I've heard it said that as you approach the speed of light (I still don't know in relation to what) you gain mass...??? what is this mass... is it hydrogen? plutonium?... probably einsteinium! :) seriously though, do you actually gain mass, and does it go away as you slow down again? Also, how do you reach the speed of light, because no matter what speed you are going, there are going to be some things that are coming toward you, and others that are going away from you, so how do you judge when you've reached the speed of light??? 16. Oct 13, 2005 ### Staff: Mentor Yes. Yes, but you don't need Einstein's Relativity for that - the delay caused by the fact that the distance is changing affects your ability to watch what's going on on the other ship. The first time I heard it, yes.
But the evidence is so overwhelming that it really isn't up for question - you get used to it. And besides - a lot of principles in physics, technology, etc. seem like magic the first time a person hears about them. That's ok, as long as you keep an open mind about learning new things. You can use a beam of light to measure your speed relative to any object you choose. You can even measure your speed relative to the beam of light (it'll always be zero). 17. Oct 13, 2005 ### G01 The mass isn't "matter" in the conventional sense. It's inertia that's given by the energy added to the object to make it speed up. You don't actually gain any more atoms in your body; you just have more inertia, and mass is really a measure of inertia. 18. Oct 14, 2005 ### cdm1a23 Thanks Very Much but Still a lingering question Thanks to everyone for the responses, but regarding the quote above... What I meant was, if you can't go light speed, but you can get very close, then how do you determine which object "counts" as an object to measure your velocity to? For example, I am going light speed right now compared to the light that is coming past me from my monitor or lamp... in fact relative to those two things I am going light speed in two different directions! So what I am asking is, if I am going .9999c compared to the earth, and then a superfast rocket goes by me in the other direction at .9999c compared to earth, then I am going .9999c (compared to the earth) in one direction and the rocket is going .9999c (compared to the earth) in the other, so what happens? Aren't we both traveling nearly twice the speed of light in relation to each other since we are both seeing the earth disappear behind us at .9999c? 19. Oct 14, 2005 ### JesseM In your own reference frame, light always moves at the same speed.
In other words, even if I observe you chasing a light beam at 0.8c in my frame, in your frame you will not observe that light beam moving away from you at only 0.2c, you'll still observe it moving at 1c in your own frame. Light actually doesn't have its own reference frame in relativity, because if it did, it would violate the rule that the laws of physics must work the same in every reference frame (so if there are some frames where light moves at c, it must move at c in all valid reference frames). So, there's no inertial frame where light is at rest and you are moving at c. Nope, even though in the earth frame you are both moving at 0.9999c in opposite directions, in your frame the earth is moving away from you at 0.9999c but the other ship is not moving at 2*0.9999c...again, you have to use the formula for addition of velocities in relativity, $$(u + v)/(1 + uv/c^2)$$, which in this case tells you that in your frame, the other ship will be moving at (0.9999c + 0.9999c)/(1 + 0.9999^2) = 1.9998c/1.99980001 = 0.999999995c away from you, in the same direction the earth is going. 20. Oct 15, 2005 ### cdm1a23 Starting to get it... Thanks very much JesseM. But I may have more questions to come!
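The two formulas used throughout this thread, the Lorentz factor behind time dilation and the relativistic velocity-addition rule, are easy to check numerically (a quick sketch with speeds in units of c; added for illustration, not part of the original discussion):

```python
import math

def gamma(beta):
    """Lorentz factor 1/sqrt(1 - v^2/c^2) for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def add_velocities(u, v):
    """Relativistic velocity addition (u + v)/(1 + u*v/c^2), with c = 1."""
    return (u + v) / (1.0 + u * v)

# Time dilation: at 0.8c a moving clock ticks at 60% of the rest rate.
assert abs(gamma(0.8) - 1.0 / 0.6) < 1e-12

# The 0.8c example from the thread: each ship sees the other at ~0.9756c.
assert abs(add_velocities(0.8, 0.8) - 1.6 / 1.64) < 1e-12

# The 0.9999c example: the combined speed stays just under c, never past it.
w = add_velocities(0.9999, 0.9999)
assert 0.999999994 < w < 1.0

# Light itself: composing c with any speed still gives c.
assert add_velocities(1.0, 0.5) == 1.0
```

The last assertion is the whole thread in one line: c is a fixed point of the composition rule, which is why every observer measures the same light speed.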
https://mathematica.stackexchange.com/questions/98640/how-to-parallelize-nested-do-loops
# how to parallelize nested do loops [closed]

I have four Do loops in which the outer loops iterate over the inner loops. This is my code:

```
jjp = 0; terms = 2;
la = Table[0, {i, 1, 8}, {j, 1, 8}];
SetSharedVariable[la, ii]
Do[
 Do[
  m = -n + (jj - 1); jp = jjp + jj; iip = 0;
  Do[
   ParallelDo[
    u = -v + (ii - 1); ip = iip + ii;
    Print[jp, ip, KroneckerDelta[n, v] KroneckerDelta[m, u]];
    la[[jp, ip]] = KroneckerDelta[n, v] KroneckerDelta[m, u];,
    {ii, 2 v + 1}];
   iip = ip;,
   {v, terms}];,
  {jj, 2 n + 1}];
 jjp = jp;,
 {n, terms}];
la // MatrixForm
```

With the above code I get the following result:

```
{{0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0},
 {0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 1}}
```

But the result should be the following:

```
{{1, 0, 0, 0, 0, 0, 0, 0}, {0, 1, 0, 0, 0, 0, 0, 0}, {0, 0, 1, 0, 0, 0, 0, 0},
 {0, 0, 0, 1, 0, 0, 0, 0}, {0, 0, 0, 0, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 1, 0, 0},
 {0, 0, 0, 0, 0, 0, 1, 0}, {0, 0, 0, 0, 0, 0, 0, 1}}
```

Can anyone help me with this?

• Your question title doesn't match your question. Please consider re-titling. As for your actual question: without knowing what you're trying to do, we can't answer the question, because there is no context given for the correct result. Obviously we can't fix what you've done unless we know why the result should be the second list rather than the actual output from the code. – march Nov 4 '15 at 16:30
• So you're trying to make the identity matrix? Why not just do IdentityMatrix[8]? Or is this a simple example of what you're trying to do, and you're trying to figure out why the code above doesn't work so that you can apply to your more complex problem?
A couple of things: (1) you don't need to nest Do loops: see the docs for Do; (2) if you tell us what you're actually trying to do, we might come up with a better way (usually, in Mathematica, you don't need loops); (3) there are lots of useful built-ins like IdentityMatrix: get cozy with docs and learn! – march Nov 4 '15 at 16:52 • Replacing ParallelDo with Do will give the expected Identity Matrix. If my understanding of the question is correct, the question is why ParallelDo doesn't give the same result that Do does. My guess is that there could be some initialization/kernel communication mixup issues. Notice that adding the line ip=3; in the loop makes both Do and ParallelDo return the expected Identity Matrix. Do[Do[m = -n + (jj - 1); jp = jjp + jj; iip = 0; ip = 3;Not quite sure though. – Lotus Nov 4 '15 at 16:55 • @Lotus. Aha! Now that makes more sense. I could not figure out what the question was (clearly from my comments). OP: please consider re-writing the text a little to make it explicit that the code works if you use Do but doesn't if you use ParallelDo. – march Nov 4 '15 at 16:58 • Try debugging with this Print variant: Print[{m, n, u, v, ii, jj, iip, jjp, ip, jp, KroneckerDelta[n, v] KroneckerDelta[m, u]}]. What you should see is that the changes to ii need not be ordered (that's the point of parallelizing) and this creates behavior differences for values that depend on ii. – Daniel Lichtblau Nov 4 '15 at 17:12 This variant seems to work. 
jjp = 0; terms = 2; la = ConstantArray[0, {8, 8}]; Do[ Do[ m = -n + (jj - 1); jp = jjp + jj; iip = 0; Do[ SetSharedVariable[la]; ParallelDo[ privateu = -v + (ii - 1); privateip = iip + ii; Print[{m, n, privateu, v, ii, jj, iip, jjp, privateip, jp, KroneckerDelta[n, v] KroneckerDelta[m, privateu]}]; la[[jp, privateip]] = KroneckerDelta[n, v] KroneckerDelta[m, privateu], {ii, 2 v + 1}, DistributedContexts -> "Global"]; iip = iip + 2 v + 1, {v, terms}] , {jj, 2 n + 1}]; jjp = jp, {n, terms}]; la Salient differences: (1) Put u and ip into non-distributed contexts so there isn't "cross-talk" between kernels about their values. (2) I changed the iip update following the innermost loop to add 2v+1. This has the effect of making it behave independently of ordering of the inner loop. • Thanks for try to improve the code. But if you change terms = 2; la = ConstantArray[0, {8, 8}]; withterms = 10; la = ConstantArray[0, {terms (terms + 2), terms (terms + 2)}]; to generate the bigger matrix I will get by AbsoluteTiming for above modified code {40.88, Null} while for my code posted above without ParallelDo I will get {0.0738672, Null}. – user35323 Nov 5 '15 at 7:33 • The slowness of ParallelDo vs Do in this example is not new. It was already manifest in your original code. You asked how to correct the parallelized code. Making it faster is a very different matter. For that you might try parallelizing the outermost loop instead of innermost. Less communication overhead that way. – Daniel Lichtblau Nov 5 '15 at 16:46 • Thanks. The idea for using ParallelDo was to make it as much as possible faster. But it seems I did it in wrong way because I am not so familiar with parallelization. 
This is my try but doesn't work terms = 10;ParallelEvaluate[jjp = 0]; la = ConstantArray[0, {terms (terms + 2), terms (terms + 2)}]; SetSharedVariable[la];ParallelDo[Do[m=-n+(jj-1);jp=jjp+jj; iip = 0; Do[Do[privateu=-v+(ii-1);privateip=iip+ii;la[[jp,privateip]] = KroneckerDelta[n,v] KroneckerDelta[m,privateu],{ii,2v+1}];iip=iip+2 v+1,{v,terms}],{jj,2n+1}];jjp=jjp+2n+1,{n,terms},DistributedContexts->"Global"];` – user35323 Nov 6 '15 at 10:56 • I hope someone can give better guidance on how to parallelize the outer loop to get both speed and correctness. I also lack the expertise; I had enough trouble working out the correctness issues with the previous version. – Daniel Lichtblau Nov 6 '15 at 16:13
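The correctness issue in this thread generalizes beyond Mathematica: the original loop threads running totals (ip, iip, jp, jjp) through iterations, so the stored indices depend on execution order, which a parallel loop does not guarantee. The fix is to compute every index from the loop variables alone. Below is a sequential Python restatement of that order-independent indexing; it is my own sketch, not code from the thread, and the function name is mine.

```python
def kron_delta_matrix(terms):
    """Sequential reference for the forum code: la[jp][ip] =
    KroneckerDelta[n, v] * KroneckerDelta[m, u], with every index computed
    directly from the loop variables (no running totals), so the result
    cannot depend on iteration order."""
    size = sum(2 * n + 1 for n in range(1, terms + 1))  # equals terms*(terms+2)
    la = [[0] * size for _ in range(size)]
    # offsets[n] = rows contributed by blocks 1 .. n-1 (replaces jjp/iip)
    offsets = {1: 0}
    for n in range(2, terms + 1):
        offsets[n] = offsets[n - 1] + 2 * (n - 1) + 1
    for n in range(1, terms + 1):
        for jj in range(1, 2 * n + 2):
            m = -n + (jj - 1)
            jp = offsets[n] + jj - 1            # row index, order-independent
            for v in range(1, terms + 1):
                for ii in range(1, 2 * v + 2):
                    u = -v + (ii - 1)
                    ip = offsets[v] + ii - 1    # column index, order-independent
                    la[jp][ip] = 1 if (n == v and m == u) else 0
    return la

# With terms = 2 this reproduces the expected 8x8 identity matrix.
assert kron_delta_matrix(2) == [[int(i == j) for j in range(8)] for i in range(8)]
```

Because jp and ip are pure functions of (n, jj) and (v, ii), the body can run in any order, which is exactly the property a correct ParallelDo version needs.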
http://math.stackexchange.com/questions/137118/constructing-finite-state-automata-corresponding-to-regular-expressions-are-my?answertab=oldest
# Constructing finite state automata corresponding to regular expressions. Are my solutions correct?

I have drawn my answers in Paint; are they correct?

(4c) For the alphabet {0, 1}, construct finite state automata corresponding to each of the following regular expressions:

(i) 0 (My Answer 4ci)
(ii) 1 | 0 (My Answer 4cii)
(iii) 0*(1 | 0) (My Answer 4ciii)

• Your 4ciii solution can be much simpler. Hint: construct an automaton for $0^*$. Then think how to convert that into an automaton for $0^*(1|0)$, realizing there are two cases to that, and you can apparently use $\epsilon$ moves. – David Lewis Apr 26 '12 at 7:23

## 1 Answer

(i) This works, but why do you bother with the arrow labelled 1? It appears that you are not requiring your automata to be complete, and so you can eliminate every state that doesn't lie on a path to a final state.

(ii) This is fine.

(iii) As David Lewis commented, this is much more complicated than necessary. Look at each of your $\epsilon$-transitions and consider whether it is really achieving anything. Some of them do have a purpose, but most of them don't. The tidiest automaton for this language doesn't have any $\epsilon$-transitions. Your automaton does recognise the right language, though.

• When I saw the solution to (iii), it appeared to me that Danny Rancher had applied the conventional algorithm for converting a regular expression into an NFA with $\epsilon$-transitions. The algorithm doesn't produce the simplest possible automata, but it does produce automata in a recognizably correct form. – MJD Apr 26 '12 at 12:04
• @MarkDominus: Right, that is very likely. – Tara B Apr 26 '12 at 12:12
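The language of 0*(1 | 0) is small enough to check mechanically. Below is a short Python sketch (mine, not from the thread) that simulates a two-state NFA for this language; it also illustrates the answer's point that the tidiest automaton needs no $\epsilon$-transitions at all.

```python
def accepts(nfa, start, accepting, s):
    """Simulate an NFA (here, one with no epsilon moves) on string s."""
    current = {start}
    for ch in s:
        current = {t for state in current
                     for t in nfa.get((state, ch), ())}
    return bool(current & accepting)

# Two-state NFA for 0*(1|0): loop on 0 in q0; the last symbol
# (either 0 or 1) moves to the accepting state q1.
NFA = {
    ("q0", "0"): {"q0", "q1"},  # a 0 can extend the 0* prefix or be the final symbol
    ("q0", "1"): {"q1"},        # a 1 must be the final symbol
}

for w in ["0", "1", "01", "000", "0001"]:
    assert accepts(NFA, "q0", {"q1"}, w)       # all in the language
for w in ["", "10", "11", "011"]:
    assert not accepts(NFA, "q0", {"q1"}, w)   # all rejected
```

The nondeterminism lives entirely in the transition on 0 from q0, which replaces the $\epsilon$-moves that the textbook Thompson-style construction would introduce.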
http://s141453.gridserver.com/do-index-yxivp/a139c0-simple-radical-form-calculator
# Simple Radical Form Calculator

A simple online simplest radical form calculator finds the simplest radical form of a given number: just enter a number as input and click Calculate to get the result within the blink of an eye. Here you can enter any square root and it will be converted to its simplest radical form. The calculator works for both numbers and expressions containing variables, it can rationalize denominators with one or two radicals, and it accepts real-number inputs for the radicand. Step-by-step explanations are provided for each calculation.

Terminology: the term underneath the radical symbol is called the radicand, and roots are often written using the radical symbol √. A radical expression is defined as any mathematical expression containing a radical (√) symbol; it can describe a square root, a cube root, a fourth root, or higher. When representing any root other than a square root, a superscript number appears on the "V-shaped" part of the symbol. Note that any positive real number has two square roots, one positive and one negative. For example, the square roots of 9 are -3 and +3, since $$(-3)^2 = (+3)^2 = 9$$.

A radical expression is in its simplest form when there are no more square roots, cube roots, fourth roots, etc. left to find, and the radical is in simplest form when the radicand is not a fraction. Like reducing a fraction to lowest terms, you should always look to factor out a perfect square when possible. The expression $$5\sqrt{2}$$ is said to be in simple radical form.

Worked examples:

- What is the square root of 145 (√145) in simplest radical form? "The square root of 145 in its simplest form" means getting the number 145 inside the radical √ as low as possible. Since 145 has no perfect-square factor greater than 1, √145 is already in simplest radical form.
- What is the square root of 34 (√34) in simplest radical form? It is already in simplest radical form: √34.
- What is the square root of 51 (√51) in simplest radical form? It is already in simplest radical form: √51.
- What is the square root of 73 (√73) in simplest radical form? It is already in simplest radical form: √73.

How to enter numbers: enter any integer, decimal, or fraction; fractions should be entered with a forward slash, such as 3/4 for the fraction $$\frac{3}{4}$$. To enter expressions, replace the square-root sign with the letter r. Example 1: to simplify $$(\sqrt{2}-1)(\sqrt{2}+1)$$, type (r2 - 1)(r2 + 1). Example 2: to rationalize $$\frac{\sqrt{2}-\sqrt{3}}{1-\sqrt{2/3}}$$, type r2-r3 for the numerator and 1-r(2/3) for the denominator.

Related calculators: the free calculator will reduce any number to its principal nth root as well as express it in simplest radical form, and it also provides a brute-force rounded approximation of the principal nth root; a separate calculator is set up specifically to calculate fourth roots, and for complex or imaginary solutions use the Simplify Radical Expressions calculator. The fraction calculator will multiply fractions and reduce the result to lowest terms, showing the work involved, and will convert improper fractions to mixed numbers in simplest form. In elementary algebra, the quadratic formula is a formula that provides the solution(s) to a quadratic equation; the quadratic equation solver solves a second-order polynomial equation such as $$ax^2 + bx + c = 0$$ for x, where a ≠ 0, using the quadratic formula, and the solution shows the work. The trig calculator finds sin, cos, tan, cot, sec, and csc: enter the chosen angle in degrees or radians, and the six most popular trig functions will appear. A triangle calculator lets you enter two known sides or angles and calculate the unknown side, angle, or area. The free radical equation calculator solves radical equations step by step; if you would like a lesson on solving radical equations, then please visit our lesson page.

Testimonials:

- "My son had used it and he has shown tremendous improvement in this subject."
- "It solves any algebra problem from your book." (Kevin Porter, TX)
- "I never bought anything over the internet, until my neighbor showed me what the Algebrator can do, and I ordered one right away." (T.P., Wyoming)
- "I got 95% on my college Algebra midterm, which boosted my grade back up to an A. I was down to a C and worried when I found your software." (James Mathew, CA)
- "If anybody needs algebra help, I highly recommend 'Algebrator'." (R.B., Kentucky)
- "If it wasn't for Algebrator, I never would have been confident enough in myself to take the SATs, let alone perform so well in the mathematical section, especially in algebra." (P.K., California)
- "After the support and love of both my mother and father, I think we'd all agree that I owe the rest of my success as a student to your software. I have the chance to go to college, something no one in my family has ever done."
- "Thanks for the quick reply. However, still no word problems, pre-calc, calc. (Please tell me that you are working on it: who is going to do my homework when I am past College Algebra?!) Here is what I like: much friendlier interface, coverage of functions, trig, better graphing, wizards."

Please use this form if you would like to have this math solver on your website, free of charge. Alegremath.com gives helpful tips on the simplest radical form calculator, arithmetic and logarithmic topics, and other algebra subject areas; Mathpoint.net includes facts on quadratic functions and precalculus; Mhsmath.com covers basic mathematics and Algebra 1; Algebra1help.com covers the quadratic formula and functions; and Rational-equations.com covers multiplying and equations in two variables.
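The simplification rule described above (factor the largest perfect square out of the radicand) is easy to sketch in code. The following is a hypothetical Python version, not part of the original page, and the function name is mine.

```python
import math

def simplest_radical_form(n):
    """Write sqrt(n) as a * sqrt(b) with b as small as possible, by
    factoring every perfect-square factor out of the radicand n."""
    if n < 0:
        raise ValueError("radicand must be non-negative")
    a = 1
    for f in range(2, math.isqrt(n) + 1):
        while n % (f * f) == 0:   # pull the perfect square f*f outside the radical
            n //= f * f
            a *= f
    return a, n                   # sqrt(original n) == a * sqrt(n)

assert simplest_radical_form(145) == (1, 145)  # already in simplest form
assert simplest_radical_form(108) == (6, 3)    # sqrt(108) = 6*sqrt(3)
```

A return value of (a, 1) means the radicand was a perfect square, and (1, n) means the radical was already in simplest form, as with √34, √51, √73, and √145 above.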
https://gmatclub.com/forum/common-mistakes-in-geometry-questions-exercise-question-229352.html
# Common Mistakes in Geometry Questions - Exercise Question #3

Posted by the e-GMAT Representative (joined 04 Jan 2015); last edited 07 Aug 2018. Difficulty: 25% (medium). Question stats: 80% (02:20) correct, 20% (02:22) wrong, based on 355 sessions.

ABC is a triangle inscribed in a circle, and one of the sides of the triangle is the diameter of the circle with center O and radius 6 units. If angle ABC > angle BAC > angle ACB and angle OAB = $$60^\circ$$, find the length of the line segment BC.
A. 6 units
B. $$6\sqrt{3}$$ units
C. 7 units
D. $$7\sqrt{3}$$ units
E. 12 units

Note: this question relates to the article on Common Errors in Geometry; kindly go through the article once before solving the question or reading the solution.

### Official Solution

Given:

• Triangle ABC is inscribed in a circle.
• One of the sides of triangle ABC is the diameter.
• O is the centre and the radius is 6 units.
• angle ABC > angle BAC > angle ACB.
• angle OAB = $$60^\circ$$.

Working: ABC is inscribed in a circle, with one of its sides as the diameter.
Thus, we can conclude that ABC must be a right-angled triangle, since the diameter subtends an angle of 90 degrees on the circumference.

Also, since angle ABC > angle BAC > angle ACB, we can infer that angle ABC = $$90^{\circ}$$ and AC is the diameter of the circle.

From the above diagram, we can infer that OBA is an equilateral triangle (OA = OB are radii, and angle OAB = $$60^{\circ}$$ forces the remaining angles to be $$60^{\circ}$$ as well) and OBC is an isosceles triangle. Hence, OB = OA = AB = 6 units.

As triangle ABC is a right-angled triangle, we can apply the Pythagorean theorem and write:

$$AB^2 + BC^2 = AC^2$$
$$6^2 + BC^2 = 12^2$$
$$BC^2 = 12^2 - 6^2 = 108$$
$$BC = 6\sqrt{3}$$

Hence, the correct option is B.

Thanks,
Saquib
Quant Expert
e-GMAT

Originally posted by EgmatQuantExpert on 22 Nov 2016, 10:11.

Status: Preparing for GMAT
Joined: 25 Nov 2015
Posts: 983
Location: India
GPA: 3.64

Re: Common Mistakes in Geometry Questions - Exercise Question #3

22 Nov 2016, 10:14

Considering the triangle with AC as the diameter of the circle, since angle ABC > angle BAC > angle ACB and angle OAB = 60 deg,
angle ABC = 90 deg, angle BAC = 60 deg and angle ACB = 30 deg.
Length of BC = AC cos 30 deg = 12 cos 30 = $$6\sqrt{3}$$

Manager
Joined: 03 Oct 2013
Posts: 84

Re: Common Mistakes in Geometry Questions - Exercise Question #3

22 Nov 2016, 14:05

A triangle inscribed in a circle with one side as the diameter implies a right-angled triangle with the diameter as the hypotenuse. Angle ABC is the greatest, therefore angle ABC = 90 degrees. This implies AC is the diameter. Given that angle OAB is 60, angle ACB is 30 degrees.

cos(30) = BC/AC => BC = 12*cos(30) = $$6\sqrt{3}$$

e-GMAT Representative
Joined: 04 Jan 2015
Posts: 2203

Re: Common Mistakes in Geometry Questions - Exercise Question #3

27 Nov 2016, 07:51

Hey Everyone,

The official solution has been posted. Kindly go through it and if you have any doubts, feel free to post your query.
Thanks,
Saquib
Quant Expert
e-GMAT

Intern
Joined: 12 Feb 2016
Posts: 1

Re: Common Mistakes in Geometry Questions - Exercise Question #3

25 Aug 2018, 01:09

What if we take BC as the diameter?

e-GMAT Representative
Joined: 04 Jan 2015
Posts: 2203

Re: Common Mistakes in Geometry Questions - Exercise Question #3

25 Aug 2018, 01:18

arjit5 wrote:
What if we take BC as diameter

Given that angle ABC has the highest value among all the 3 angles, and the triangle is a right-angled triangle, angle ABC = 90. Now, if angle ABC = 90, the side opposite to that angle, AC, must be the diameter of the circle. Hence, we cannot assume BC as the diameter.
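As a quick numerical sanity check of the result above (an editorial sketch, not part of any post in the thread), the right-triangle relations from the official solution can be verified in a few lines of Python:

```python
import math

# From the official solution: AC is the diameter (12 units) and AB = 6,
# since triangle OAB is equilateral (OA = OB = 6, angle OAB = 60 degrees).
AC = 12.0
AB = 6.0

# Pythagorean theorem in the right triangle ABC (angle ABC = 90 degrees)
BC = math.sqrt(AC**2 - AB**2)

# Same result via trigonometry: BC = AC * cos(30 degrees)
BC_trig = AC * math.cos(math.radians(30))

print(BC, BC_trig, 6 * math.sqrt(3))  # all three agree, matching option B
```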
https://en.wikipedia.org/wiki/Iterated_logarithm
Iterated logarithm

In computer science, the iterated logarithm of $n$, written $\log^* n$ (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to $1$. The simplest formal definition is the result of this recurrence relation:

$$\log^* n := \begin{cases} 0 & \text{if } n \leq 1; \\ 1 + \log^*(\log n) & \text{if } n > 1 \end{cases}$$

On the positive real numbers, the continuous super-logarithm (inverse tetration) is essentially equivalent:

$$\log^* n = \lceil \mathrm{slog}_e(n) \rceil$$

i.e. the base-b iterated logarithm satisfies $\log^* n = y$ if n lies within the interval $({}^{y-1}b, {}^{y}b]$, where ${}^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y}$ denotes tetration. However, on the negative real numbers, log-star is $0$, whereas $\lceil \mathrm{slog}_e(-x) \rceil = -1$ for positive $x$, so the two functions differ for negative arguments.

Figure 1. Demonstrating log* 4 = 2 for the base-e iterated logarithm. The value of the iterated logarithm can be found by "zig-zagging" on the curve y = log_b(x) from the input n, to the interval [0, 1]. In this case, b = e. The zig-zagging entails starting from the point (n, 0) and iteratively moving to (n, log_b(n)), to (0, log_b(n)), to (log_b(n), 0).

The iterated logarithm accepts any positive real number and yields an integer. Graphically, it can be understood as the number of "zig-zags" needed in Figure 1 to reach the interval $[0, 1]$ on the x-axis.

In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base $2$) instead of the natural logarithm (with base e).
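The recurrence above translates directly into code. A minimal Python sketch of the binary iterated logarithm (my illustration, not from the article):

```python
import math

def lg_star(n: float) -> int:
    """Binary iterated logarithm lg* n: the number of times the base-2
    logarithm must be applied before the result is <= 1 (0 for n <= 1)."""
    count = 0
    while n > 1:
        n = math.log2(n)  # math.log2 is exact on powers of two
        count += 1
    return count

# Values match the base-2 table below:
for x in (1, 2, 4, 16, 65536):
    print(x, lg_star(x))
# 1 0, 2 1, 4 2, 16 3, 65536 4
```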
Mathematically, the iterated logarithm is well-defined for any base greater than $e^{1/e} \approx 1.444667$, not only for base $2$ and base e.

Analysis of algorithms

The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms, such as randomized algorithms for geometric problems,[1] finding an approximate maximum,[2] and parallel list ranking via deterministic coin tossing.[3]

The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself. For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.

The base-2 iterated logarithm:

x                  lg* x
(−∞, 1]            0
(1, 2]             1
(2, 4]             2
(4, 16]            3
(16, 65536]        4
(65536, 2^65536]   5

Higher bases give smaller iterated logarithms. Indeed, the only function commonly used in complexity theory that grows more slowly is the inverse Ackermann function.

Other applications

The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. It is also proportional to the additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root.

In computational complexity theory, Santhanam[6] shows that the computational resources DTIME — computation time for a deterministic Turing machine — and NTIME — computation time for a non-deterministic Turing machine — are distinct up to $n\sqrt{\log^* n}$.

Notes

1. ^ Olivier Devillers, "Randomization yields simple O(n log* n) algorithms for difficult Ω(n) problems". International Journal of Computational Geometry & Applications 2:01 (1992), pp. 97–111.
2. ^ Noga Alon and Yossi Azar, "Finding an Approximate Maximum". SIAM Journal on Computing 18:2 (1989), pp. 258–267.
3.
^ Richard Cole and Uzi Vishkin, "Deterministic coin tossing with applications to optimal parallel list ranking", Information and Control 70:1 (1986), pp. 32–53.
4. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8. Section 30.5.
5. ^ https://www.cs.princeton.edu/~rs/AlgsDS07/01UnionFind.pdf
6. ^ R. Santhanam, "On Separators, Segregators and Time versus Space".
https://www.azimuthproject.org/azimuth/show/Blog+-+warming+slowdown%3F+%28part+1%29
# The Azimuth Project

Blog - warming slowdown? (part 1)

This is a blog article in progress, written by Jan Galkowski. To see discussions of the article as it was being written, visit the Azimuth Forum. For the final polished version, go to the Azimuth Blog. If you want to write your own article, please read the directions on How to blog.

guest post by Jan Galkowski

### 1. How Heat Flows and Why It Matters

Is there something missing in the recent climate temperature record?

Heat is most often experienced as energy density, related to temperature. While technically temperature is only meaningful for a body in thermal equilibrium, temperature is the operational definition of heat content, both in daily life and as a scientific measurement, whether at a point or averaged.

For the present discussion, it is taken as given that increasing atmospheric concentrations of carbon dioxide trap and re-radiate Earth blackbody radiation to its surface, resulting in a higher mean blackbody equilibration temperature for the planet, via radiative forcing (Ca2014a, Pi2012, Pi2011, Pe2006).

The question is, how does a given joule of energy travel? Once on Earth, does it remain in atmosphere? Warm the surface? Go into the oceans? And, especially, if it does go into the oceans, what is its residence time before being released to atmosphere? These are important questions (Le2012a, Le2012b).

Because of the miscibility of energy, questions of residence time are very difficult to answer. A joule of energy can't be tagged with a radioisotope like matter sometimes can. In practice, energy content is estimated as a constant plus the time integral of energy flux across a well-defined boundary, measured from a baseline moment.

Variability is a key aspect of natural systems, whether biological or large scale geophysical systems such as Earth's climate (Sm2009).
Variability is also a feature of statistical models used to describe the behavior of natural systems, whether they be straightforward empirical models or models based upon ab initio physical calculations. Some of the variability in models captures the variability of the natural systems which they describe, but some variability is inherent in the mechanism of the models, an artificial variability which is not present in the phenomena they describe. No doubt, there is always some variability in natural phenomena which no model captures. This variability can be partitioned into parts, at the risk of specifying components which are not directly observable. Sometimes they can be inferred.

Models of planetary climate are both surprisingly robust and understood well enough that appreciable simplifications, such as setting aside fluid dynamism, are possible without damaging their utility (Pi2012). Thus, the general outline of the long term or asymptotic and global consequences that arise when atmospheric carbon dioxide concentrations double or triple is known pretty well. More is known from the paleoclimate record. What is less certain are the dissipation and diffusion mechanisms for this excess energy and its behavior in time (Kr2014, Sh2014a, Sh2014b, Sa2011). There is keen interest in these mechanisms because of the implications differing magnitudes have for regional climate forecasts and economies (Em2011, Sm2011, Le2010). Moreover, there is a natural desire to obtain empirical confirmation of physical calculations, as difficult as that might be, and as subjective as judgments regarding quality of predictions might be (Sc2014, Be2013, Mu2013a, Mu2013b, Br2006, Co2013, Fy2013, Ha2013, Ha2014, Ka2013a, Sl2013, Tr2013, Mo2012, Sa2012, Ke2011a, Kh2008a, Kh2008b, Le2005, De1982).
Observed rates of surface temperature increase in recent decades have shown a moderating slope compared with both long term statistical trends and climate model projections (En2014, Fy2014, Sc2014, Ta2013, Tr2013, Mu2013b, Fy2013, Fy2013s, Be2013). It's the purpose of this article to present this evidence, and report the research literature's consensus on where the heat resulting from radiative forcing is going, as well as sketch some implications of that containment.

### 2. Tools of the Trade

I'm Jan Galkowski. I'm a statistician and signals engineer, with an undergraduate degree in Physics and a Masters in EE & Computer Science. I work for Akamai Technologies of Cambridge, MA, where I study time series of Internet activity and other data sources, doing data analysis primarily using spectral and Bayesian computational methods. I am not a climate scientist, but am keenly interested in the mechanics of oceans, atmosphere, and climate disruption. I approach these problems from the perspective of a statistician and physical dynamicist. Climate science is an avocation.

While I have 32 years experience doing quantitative analysis, primarily in industry, I have found that the statistical and mathematical problems I encounter at Akamai have remarkable parallels to those in some geophysics, such as hydrology and assessments of sea level rise, as well as in some population biology. Thus, it pays to read their literature and understand their techniques. I also like to think that Akamai has something significant to contribute to this problem of mitigating forcings of climate change, such as enabling and supporting the ability of people to attend business and science meetings by high quality video call rather than hopping on CO2-emitting vehicles.

As the great J. W. Tukey said: "The best thing about being a statistician is that you get to play in everyone's backyard." Anyone who doubts the fun of doing so, or how statistics enables such, should read Young.

### 3.
On Surface Temperatures, Land and Ocean

Independently of climate change, monitoring surface temperatures globally is a useful geophysical project. They are accessible, can be measured in a number of ways, permit calibration and cross-checking, are taken at convenient boundaries between land-atmosphere or ocean-atmosphere, and coincide with the living space about which we most care. Nevertheless, like any large observational effort in the field, such measurements need careful assessment and processing before they can be properly interpreted. The Berkeley Earth Surface Temperature ("BEST") Project represents the most comprehensive such effort, but it was not possible without many predecessors, such as HadCRUT4, and works by Kennedy, et al and Rohde (Ro2013a, Mo2012, Ke2011a, Ke2011b, Ro2013b).

Surface temperature is a manifestation of four interacting processes. First, there is warming of the surface by the atmosphere. Second, there is lateral heating by atmospheric convection and latent heat in water vapor. Third, during daytime, there is warming of the surface by the Sun, or insolation which survives reflection. Last, there is warming of the surface from below, either latent heat stored subsurface, or geologic processes. Roughly speaking, these are ordered from most important to least.

These are all manifestations of energy flows, a consequence of equalization of different contributions of energy to Earth. Physically speaking, the total energy of the Earth climate system is a constant plus the time integral of the energy of non-reflected insolation, less the energy of the long wave radiation or blackbody radiation which passes from Earth out to space, plus geothermal energy ultimately due to radioisotope decay within Earth's asthenosphere and mantle, plus thermal energy generated by solid Earth and ocean tides, plus waste heat from anthropogenic combustion and power sources (Decay). The amount of non-reflected insolation depends upon albedo, which itself slowly varies.
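The energy bookkeeping in the paragraph above can be written schematically (the symbols here are my shorthand, not the author's notation):

```latex
% Total Earth climate-system energy as described in the text:
% a constant plus the time integral of the listed fluxes.
E(t) = E(t_0) + \int_{t_0}^{t}
       \big[\, (1-\alpha)\,S \;-\; L \;+\; G \;+\; T \;+\; A \,\big]\, dt'
```

where $\alpha$ is the (slowly varying) albedo, $S$ the incident insolation, $L$ the outgoing long wave radiation, $G$ the geothermal flux, $T$ the tidal heating, and $A$ the anthropogenic waste heat.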
The amount of long wave radiation leaving Earth for space depends upon the amount of water aloft, the amounts and types of greenhouse gases, and other factors. Our understanding of this has improved rapidly, as can be seen by contrasting Kiehl, et al in 1997 with Trenberth, et al in 2009 and the IPCC's 2013 WG1 Report (Ki1997, Tr2009, IP2013). Steve Easterbrook has given a nice summary of radiative forcing at his blog, as well as provided a succinct recap of the 2013 IPCC WG1 Report and its take on energy flows elsewhere at the Azimuth blog. I refer the reader to those references for information about energy budgets, what we know about them, and what we do not.

Some ask whether or not there is a physical science basis for the "moderation" in global surface temperatures and, if there is, how that might work. It is an interesting question, for such a conclusion is predicated upon observed temperature series being calibrated and used correctly, and, further, upon insufficient precision in climate model predictions, whether simply perceived or actual. Hypothetically, it could be that the temperature series are not being used correctly and the models are correct, and which evidence we choose to believe depends upon our short-term goals. Surely, from a scientific perspective, what's wanted is a reconciliation of both, and that is where many climate scientists invest their efforts. This is also an interesting question because it is, at its root, a statistical one, namely, how do we know which model is better (Ve2012, Sm2009, Sl2013, Ge1998, Co2006, Fe2011b, Bu2002)?

A first graph, Figure 1, depicting evidence of warming is, to me, quite remarkable. (You can click on this or any figure here to enlarge it.)

Figure 1. Ocean temperatures at depth, from Yale Climate Forum.

A similar graph is shown in the important series by Steve Easterbrook recapping the recent IPCC Report. A great deal of excess heat is going into the oceans.
In fact, most of it is, and there is an especially significant amount going deep into the southern oceans, something which may have implications for Antarctica. This can happen in many ways, but one dramatic way is due to a phase of the El Niño Southern Oscillation ("ENSO"). Another way is storage by the Atlantic Meridional Overturning Circulation ("AMOC") (Ko2014).

The trade winds along the Pacific equatorial region vary in strength. When they are weak, the phenomenon called El Niño is seen, affecting weather in the United States and in Asia. Evidence for El Niño includes elevated sea-surface temperatures ("SSTs") in the eastern Pacific. This short-term climate variation brings increased rainfall to the southern United States and Peru, and drought to east Asia and Australia, often triggering large wildfires there. The reverse phenomenon, La Niña, is produced by strong trades, and results in cold SSTs in the eastern Pacific, and plentiful rainfall in east Asia and northern Australia. Strong trades actually pile ocean water up against Asia, and these warmer-than-average waters push surface waters there down, creating a cycle of returning cold waters back to the eastern Pacific. This process is depicted in Figures 2 and 3.

Figure 2. Oblique view of variability of the Pacific equatorial region from El Niño to La Niña and back. The vertical height of the ocean is exaggerated to show the piling up of waters in the Pacific warm pool.

Figure 3. Trade winds vary in strength, having consequences for pooling and flow of Pacific waters and sea surface temperatures.

At its peak, a La Niña causes waters to accumulate in the Pacific warm pool, and this results in surface heat being pushed into the deep ocean. To the degree to which heat goes into the deep ocean, it is not available in atmosphere. To the degree to which the trades do not pile waters into the Pacific warm pool and, ultimately, into the depths, that warm water is in contact with atmosphere (Me2011).
There are suggestions that warm waters at depth rise to the surface (Me2013).

Figure 4. Strong trade winds cause the warm surface waters of the equatorial Pacific to pile up against Asia.

Documentation of land and ocean surface temperatures is done in a variety of ways. There are several important sources, including Berkeley Earth, NASA GISS, and the Hadley Centre/Climatic Research Unit ("CRU") data sets (Ro2013a, Ha2010, Mo2012). The three, referenced here as BEST, GISS, and HadCRUT4, respectively, have been compared by Rohde. They differ in duration and extent of coverage, but allow comparable inferences. For example, linear regressions establishing a trend using July monthly average temperatures from 1880 to 2012 for Moscow from GISS and BEST agree that Moscow's July 2010 heat was 3.67 standard deviations from the long term trend (GISS-BEST).

Nevertheless, there is an important difference between BEST and GISS, on the one hand, and HadCRUT4. BEST and GISS attempt to capture and convey a single best estimate of temperatures on Earth's surface, and attach an uncertainty measure to each number. Sometimes, because of absence of measurements or equipment failures, there are no measurements, and these are clearly marked in the series. HadCRUT4 is different. With HadCRUT4 the uncertainty in measurements is described by a hundred member ensemble of values, actually a 2592-by-1967 matrix. Rows correspond to observations from 2592 patches, 36 in latitude and 72 in longitude, with which it represents the surface of Earth. Columns correspond to each month from January 1850 to November 2013. It is possible for any one of these cells to be coded as "missing". This detail is important because HadCRUT4 is the basis for a paper suggesting the pause in global warming is structurally inconsistent with climate models. That paper will be discussed later.

### 4. Rumors of Pause

Figure 5 shows the global mean surface temperature anomalies relative to a standard baseline, 1950-1980.
Before going on, consider that figure. Study it. What can you see in it?

Figure 5. Global surface temperature anomalies relative to a 1950-1980 baseline.

Figure 6 shows the same graph, but now with two trendlines obtained by applying a smoothing spline, one smoothing more than another. One of the two indicates an uninterrupted uptrend. The other shows a peak and a downtrend, along with wiggles around the other trendline. Note the smoothing algorithm is the same in both cases, differing only in the setting of a smoothing parameter. Which is correct? What is "correct"? Figure 7 shows a time series of anomalies for Moscow, in Russia. Do these all show the same trends?

These are difficult questions, but the changes seen in Figure 6 could be evidence of a warming "hiatus". Note that, given Figure 6, whether or not there is a reduction in the rate of temperature increase depends upon the choice of a smoothing parameter. In a sense, that's like having a major conclusion depend upon a choice of coordinate system, something we've collectively learned to suspect. We'll have a more careful look at this in Section 5. With that said, people have sought reasons and assessments of how important this phenomenon is. The answers have ranged from the conclusive "Global warming has stopped", to "Perhaps the slowdown is due to 'natural variability'", to "There is no statistically significant change". Let's see what some of the perspectives are.

Figure 6. Global surface temperature anomalies relative to a 1950-1980 baseline, with two smoothing splines printed atop.

Figure 7. Temperature anomalies for Moscow, Russia.

It is hard to find a scientific paper which advances the proposal that climate might be or might have been cooling in recent history.
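The sensitivity to the smoothing parameter described above is easy to reproduce. The following sketch (my illustration, not the article's code) uses a Gaussian kernel smoother in place of the smoothing spline of Figure 6, applied to synthetic "anomaly" data with an upward trend, a decadal wiggle, and noise:

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1950.0, 2014.0)
# synthetic anomalies: upward trend + decadal wiggle + noise
anomaly = (0.01 * (years - 1950)
           + 0.1 * np.sin((years - 1950) / 4.0)
           + rng.normal(0.0, 0.05, years.size))

def kernel_smooth(x, y, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

heavy = kernel_smooth(years, anomaly, bandwidth=15.0)  # strong smoothing
light = kernel_smooth(years, anomaly, bandwidth=2.0)   # light smoothing

# The heavily smoothed trendline rises over the whole record; the lightly
# smoothed one tracks the wiggles and shows local downturns.
print(heavy[-1] - heavy[0] > 0)
print(np.any(np.diff(light) < 0))
```

The point carries over to smoothing splines: the same algorithm, with only the smoothing parameter changed, can show either an uninterrupted uptrend or a recent downturn.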
The earliest I can find are repeated presentations by a single geologist in the proceedings of the Geological Society of America, a conference which, like many, gives papers limited peer review (Ea2000, Ea2001, Ea2005, Ea2006a, Ea2006b, Ea2007, Ea2008). It is difficult to comment on this work since the full methods are not available for review. The content of the abstracts appears to ignore the possibility of lagged response in any physical system. These claims were summarized by Easterling and Wehner in 2009, attributing claims of a "pause" to cherry-picking of sections of the temperature time series, such as 1998-2008, and what might be called media amplification. Further, technical inconsistencies within the scientific enterprise, perfectly normal in its deployment and management of new methods and devices for measurement, have been highlighted and abused to parlay claims of global cooling (Wi2007, Ra2006, Pi2006).

Based upon subsequent papers, climate science seemed to not only need to explain such variability, but also to provide a specific explanation for what could be seen as a recent moderation in the abrupt warming of the mid-late 1990s. When such explanations were provided, appealing to oceanic capture, as described in Section 3, the explanation seemed to be taken as an acknowledgment of a need and a problem, when often they were provided in good faith, as explanation and teaching (Me2011, Tr2013, En2014).

Other factors besides the overwhelming one of oceanic capture contribute as well. If there is a great deal of melting in the polar regions, this process captures heat from the oceans. Evaporation captures heat in water. No doubt these return, due to the water cycle and latent heat of water, but the point is there is much opportunity for transfer of radiative forcing and carrying it appreciable distances.
Note that, given the overall temperature anomaly series, such as Figure 6, and specific series, such as the one for Moscow in Figure 7, moderation in warming is not definitive. It is a statistical question, and, pretending for the moment we know nothing of geophysics, a difficult one. But there certainly is no problem with accounting for the Earth's energy budget overall, even if the distribution of energy over its surface cannot be specifically explained (Ki1997, Tr2009, Pi2012). This is not a surprise, since the equipartition theorem of physics fails to apply to a system which has not achieved thermal equilibrium.

An interesting discrepancy is presented in a pair of papers in 2013 and 2014. The first, by Fyfe, Gillet, and Zwiers, has the (somewhat provocative) title "Overestimated global warming over the past 20 years". (Supplemental material is also available and is important to understand their argument.) It has been followed by additional correspondence from Fyfe and Gillet ("Recent observed and simulated warming") applying the same methods to argue that, even with the Pacific surface temperature anomalies and explicitly accommodating the coverage bias in the HadCRUT4 dataset, as emphasized by Kosaka and Xie, there remain discrepancies between the surface temperature record and climate model ensemble runs. In addition, Fyfe and Gillet dismiss the problems of coverage cited by Cowtan and Way, arguing they were making "like for like" comparisons which are robust given the dataset and the region examined with CMIP5 models.

How these scientific discussions present that challenge and its possible significance is a story of trends, of variability, and hopefully of what all these investigations are saying in common, including the important contribution of climate models.
#### Next Time

Next time I'll talk about ways of estimating trends, what these have to say about global warming, and the work of Fyfe, Gillet, and Zwiers (Fy2013) comparing climate models to HadCRUT4 temperature data.

### Bibliography

1. Credentials. I have taken courses in geology from Binghamton University, but the rest of my knowledge of climate science is from reading the technical literature, principally publications from the American Geophysical Union and the American Meteorological Society, and self-teaching, from textbooks like Pierrehumbert. I seek to find ways where my different perspective on things can help advance and explain the climate science enterprise. I also apply my skills to working local environmental problems, ranging from inferring people's use of energy in local municipalities, as well as studying things like trends in solid waste production at the same scales using Bayesian inversions. I am fortunate that techniques used in my professional work and those in these problems overlap so much. I am a member of the American Statistical Association, the American Geophysical Union, the American Meteorological Association, the International Society for Bayesian Analysis, as well as the IEEE and its signal processing society.
2. (Yo2014) D. S. Young, "Bond. James Bond. A statistical look at cinema's most famous spy", CHANCE Magazine, 27(2), 2014, 21-27, http://chance.amstat.org/2014/04/james-bond/.
3. (Ca2014a) S. Carson, Science of Doom, a Web site devoted to atmospheric radiation physics and forcings, last accessed 7 February 2014.
4. (Pi2012) R. T. Pierrehumbert, Principles of Planetary Climate, Cambridge University Press, 2010, reprinted 2012.
5. (Pi2011) R. T. Pierrehumbert, "Infrared radiative and planetary temperature", Physics Today, January 2011, 33-38.
6. (Pe2006) G. W. Petty, A First Course in Atmospheric Radiation, 2nd edition, Sundog Publishing, 2006.
7. (Le2012a) S. Levitus, J. I. Antonov, T. P. Boyer, O. K. Baranova, H. E. Garcia, R. A.
Locarnini, A. V. Mishonov, J. R. Reagan, D. Seidov, E. S. Yarosh, and M. M. Zweng, "World ocean heat content and thermosteric sea level change (0-2000 m), 1955-2010", Geophysical Research Letters, 39, L10603, 2012, http://dx.doi.org/10.1029/2012GL051106. 8. (Le2012b) S. Levitus, J. I. Antonov, T. P. Boyer, O. K. Baranova, H. E. Garcia, R. A. Locarnini, A. V. Mishonov, J. R. Reagan, D. Seidov, E. S. Yarosh, and M. M. Zweng, "World ocean heat content and thermosteric sea level change (0-2000 m), 1955-2010: supplementary information", Geophysical Research Letters, 39, L10603, 2012, http://onlinelibrary.wiley.com/doi/10.1029/2012GL051106/suppinfo. 9. (Sm2009) R. L. Smith, C. Tebaldi, D. Nychka, L. O. Mearns, "Bayesian modeling of uncertainty in ensembles of climate models", Journal of the American Statistical Association, 104(485), March 2009. 10. Nomenclature. The nomenclature can be confusing. With respect to observations, variability arising due to choice of method is sometimes called structural uncertainty (Mo2012, Th2005). 11. (Kr2014) J. P. Krasting, J. P. Dunne, E. Shevliakova, R. J. Stouffer (2014), "Trajectory sensitivity of the transient climate response to cumulative carbon emissions", Geophysical Research Letters, 41, 2014, http://dx.doi.org/10.1002/2013GL059141. 12. (Sh2014a) D. T. Shindell, "Inhomogeneous forcing and transient climate sensitivity", Nature Climate Change, 4, 2014, 274-277, http://dx.doi.org/10.1038/nclimate2136. 13. (Sh2014b) D. T. Shindell, "Shindell: On constraining the Transient Climate Response", RealClimate, http://www.realclimate.org/index.php?p=17134, 8 April 2014. 14. (Sa2011) B. M. Sanderson, B. C. O’Neill, J. T. Kiehl, G. A. Meehl, R. Knutti, W. M. Washington, "The response of the climate system to very high greenhouse gas emission scenarios", Environmental Research Letters, 6, 2011, 034005, http://dx.doi.org/10.1088/1748-9326/6/3/034005. 15. (Em2011) K. Emanuel, "Global warming effects on U.S. 
hurricane damage", Weather, Climate, and Society, 3, 2011, 261-268, http://dx.doi.org/10.1175/WCAS-D-11-00007.1. 16. (Sm2011) L. A. Smith, N. Stern, "Uncertainty in science and its role in climate policy", Philosophical Transactions of the Royal Society A, 369, 2011, 1-24, http://dx.doi.org/10.1098/rsta.2011.0149. 17. (Le2010) M. C. Lemos, R. B. Rood, "Climate projections and their impact on policy and practice", WIREs Climate Change, 1, September/October 2010, http://dx.doi.org/10.1002/wcc.71. 18. (Sc2014) G. A. Schmidt, D. T. Shindell, K. Tsigaridis, "Reconciling warming trends", Nature Geoscience, 7, 2014, 158-160, http://dx.doi.org/10.1038/ngeo2105. 19. (Be2013) "Examining the recent 'pause' in global warming", Berkeley Earth Memo, 2013, http://static.berkeleyearth.org/memos/examining-the-pause.pdf. 20. (Mu2013a) R. A. Muller, J. Curry, D. Groom, R. Jacobsen, S. Perlmutter, R. Rohde, A. Rosenfeld, C. Wickham, J. Wurtele, "Decadal variations in the global atmospheric land temperatures", Journal of Geophysical Research: Atmospheres, 118(11), 2013, 5280-5286, http://dx.doi.org/10.1002/jgrd.50458. 21. (Mu2013b) R. Muller, "Has global warming stopped?", Berkeley Earth Memo, September 2013, http://static.berkeleyearth.org/memos/has-global-warming-stopped.pdf. 22. (Br2006) P. Brohan, J. Kennedy, I. Harris, S. Tett, P. D. Jones, "Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850", Journal of Geophysical Research: Atmospheres, 111(D12), 27 June 2006, http://dx.doi.org/10.1029/2005JD006548. 23. (Co2013) K. Cowtan, R. G. Way, "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends", Quarterly Journal of the Royal Meteorological Society, 2013, http://dx.doi.org/10.1002/qj.2297. 24. (Fy2013) J. C. Fyfe, N. P. Gillett, F. W.
Zwiers, "Overestimated global warming over the past 20 years", Nature Climate Change, 3, September 2013, 767-769, and online at http://dx.doi.org/10.1038/nclimate1972. 25. (Ha2013) E. Hawkins, "Comparing global temperature observations and simulations, again", Climate Lab Book, http://www.climate-lab-book.ac.uk/2013/comparing-observations-and-simulations-again/, 28 May 2013. 26. (Ha2014) A. Hannart, A. Ribes, P. Naveau, "Optimal fingerprinting under multiple sources of uncertainty", Geophysical Research Letters, 41, 2014, 1261-1268, http://dx.doi.org/10.1002/2013GL058653. 27. (Ka2013a) R. W. Katz, P. F. Craigmile, P. Guttorp, M. Haran, B. Sansó, M. L. Stein, "Uncertainty analysis in climate change assessments", Nature Climate Change, 3, September 2013, 769-771 ("Commentary"). 28. (Sl2013) J. Slingo, "Statistical models and the global temperature record", Met Office, May 2013, http://www.metoffice.gov.uk/media/pdf/2/3/Statistical_Models_Climate_Change_May_2013.pdf. 29. (Tr2013) K. Trenberth, J. Fasullo, "An apparent hiatus in global warming?", Earth’s Future, 2013, http://dx.doi.org/10.1002/2013EF000165. 30. (Mo2012) C. P. Morice, J. J. Kennedy, N. A. Rayner, P. D. Jones, "Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set", Journal of Geophysical Research, 117, 2012, http://dx.doi.org/10.1029/2011JD017187. See also http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html where the 100 ensembles can be found. 31. (Sa2012) B. D. Santer, J. F. Painter, C. A. Mears, C. Doutriaux, P. Caldwell, J. M. Arblaster, P. J. Cameron-Smith, N. P. Gillett, P. J. Gleckler, J. Lanzante, J. Perlwitz, S. Solomon, P. A. Stott, K. E. Taylor, L. Terray, P. W. Thorne, M. F. Wehner, F. J. Wentz, T. M. L. Wigley, L. J. Wilcox, C.-Z.
Zou, "Identifying human influences on atmospheric temperature", Proceedings of the National Academy of Sciences, 29 November 2012, http://dx.doi.org/10.1073/pnas.1210514109. 32. (Ke2011a) J. J. Kennedy, N. A. Rayner, R. O. Smith, D. E. Parker, M. Saunby, "Reassessing biases and other uncertainties in sea-surface temperature observations measured in situ since 1850, part 1: measurement and sampling uncertainties", Journal of Geophysical Research: Atmospheres (1984-2012), 116(D14), 27 July 2011, http://dx.doi.org/10.1029/2010JD015218. 33. (Kh2008a) S. Kharin, "Statistical concepts in climate research: Some misuses of statistics in climatology", Banff Summer School, 2008, part 1 of 3. Slide 7, "Climatology is a one-experiment science. There is basically one observational record in climate", http://www.atmosp.physics.utoronto.ca/C-SPARC/ss08/lectures/Kharin-lecture1.pdf. 34. (Kh2008b) S. Kharin, "Climate Change Detection and Attribution: Bayesian view", Banff Summer School, 2008, part 3 of 3, http://www.atmosp.physics.utoronto.ca/C-SPARC/ss08/lectures/Kharin-lecture3.pdf. 35. (Le2005) T. C. K. Lee, F. W. Zwiers, G. C. Hegerl, X. Zhang, M. Tsao, "A Bayesian climate change detection and attribution assessment", Journal of Climate, 18, 2005, 2429-2440. 36. (De1982) M. H. DeGroot, S. Fienberg, "The comparison and evaluation of forecasters", The Statistician, 32(1-2), 1983, 12-22. 37. (Ro2013a) R. Rohde, R. A. Muller, R. Jacobsen, E. Muller, S. Perlmutter, A. Rosenfeld, J. Wurtele, D. Groom, C. Wickham, "A new estimate of the average Earth surface land temperature spanning 1753 to 2011", Geoinformatics & Geostatistics: An Overview, 1(1), 2013, http://dx.doi.org/10.4172/2327-4581.1000101. 38. (Ke2011b) J. J. Kennedy, N. A. Rayner, R. O. Smith, D. E. Parker, M.
Saunby, "Reassessing biases and other uncertainties in sea-surface temperature observations measured in situ since 1850, part 2: Biases and homogenization", Journal of Geophysical Research: Atmospheres (1984-2012), 116(D14), 27 July 2011, http://dx.doi.org/10.1029/2010JD015220. 39. (Ro2013b) R. Rohde, "Comparison of Berkeley Earth, NASA GISS, and Hadley CRU averaging techniques on ideal synthetic data", Berkeley Earth Memo, January 2013, http://static.berkeleyearth.org/memos/robert-rohde-memo.pdf. 40. (En2014) M. H. England, S. McGregor, P. Spence, G. A. Meehl, A. Timmermann, W. Cai, A. S. Gupta, M. J. McPhaden, A. Purich, A. Santoso, "Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus", Nature Climate Change, 4, 2014, 222-227, http://dx.doi.org/10.1038/nclimate2106. See also http://www.realclimate.org/index.php/archives/2014/02/going-with-the-wind/. 41. (Fy2014) J. C. Fyfe, N. P. Gillett, "Recent observed and simulated warming", Nature Climate Change, 4, March 2014, 150-151, http://dx.doi.org/10.1038/nclimate2111. 42. (Ta2013) Tamino, "el Niño and the Non-Spherical Cow", Open Mind blog, http://tamino.wordpress.com/2013/09/02/el-nino-and-the-non-spherical-cow/, 2 September 2013. 43. (Fy2013s) Supplement to J. C. Fyfe, N. P. Gillett, F. W. Zwiers, "Overestimated global warming over the past 20 years", Nature Climate Change, 3, September 2013, online at http://www.nature.com/nclimate/journal/v3/n9/extref/nclimate1972-s1.pdf. 44. Ionizing. There are tiny amounts of heating due to impinging ionizing radiation from space, and changes in Earth's magnetic field. 45. (Ki1997) J. T. Kiehl, K. E. Trenberth, "Earth's annual global mean energy budget", Bulletin of the American Meteorological Society, 78(2), 1997, http://dx.doi.org/10.1175/1520-0477(1997)0782.0.CO;2. 46. (Tr2009) K. Trenberth, J. Fasullo, J. T. 
Kiehl, "Earth's global energy budget", Bulletin of the American Meteorological Society, 90, 2009, 311–323, http://dx.doi.org/10.1175/2008BAMS2634.1. 47. (IP2013) IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp. Also available online at https://www.ipcc.ch/report/ar5/wg1/. 48. (Ve2012) A. Vehtari, J. Ojanen, "A survey of Bayesian predictive methods for model assessment, selection and comparison", Statistics Surveys, 6 (2012), 142-228, http://dx.doi.org/10.1214/12-SS102. 49. (Ge1998) J. Geweke, "Simulation Methods for Model Criticism and Robustness Analysis", in Bayesian Statistics 6, J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith (eds.), Oxford University Press, 1998. 50. (Co2006) P. Congdon, Bayesian Statistical Modelling, 2nd edition, John Wiley & Sons, 2006. 51. (Fe2011b) D. Ferreira, J. Marshall, B. Rose, "Climate determinism revisited: Multiple equilibria in a complex climate model", Journal of Climate, 24, 2011, 992-1012, http://dx.doi.org/10.1175/2010JCLI3580.1. 52. (Bu2002) K. P. Burnham, D. R. Anderson, Model Selection and Multimodel Inference, 2nd edition, Springer-Verlag, 2002. 53. (Ea2014a) S. Easterbrook, "What Does the New IPCC Report Say About Climate Change? (Part 4): Most of the heat is going into the oceans", 11 April 2014, at the Azimuth blog, http://johncarlosbaez.wordpress.com/2014/04/11/what-does-the-new-ipcc-report-say-about-climate-change-part-4/. 54. (Ko2014) Y. Kostov, K. C. Armour, and J. 
Marshall, "Impact of the Atlantic meridional overturning circulation on ocean heat storage and transient climate change", Geophysical Research Letters, 41, 2014, 2108–2116, http://dx.doi.org/10.1002/2013GL058998. 55. (Me2011) G. A. Meehl, J. M. Arblaster, J. T. Fasullo, A. Hu, K. E. Trenberth, "Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods", Nature Climate Change, 1, 2011, 360–364, http://dx.doi.org/10.1038/nclimate1229. 56. (Me2013) G. A. Meehl, A. Hu, J. M. Arblaster, J. Fasullo, K. E. Trenberth, "Externally forced and internally generated decadal climate variability associated with the Interdecadal Pacific Oscillation", Journal of Climate, 26, 2013, 7298–7310, http://dx.doi.org/10.1175/JCLI-D-12-00548.1. 57. (Ha2010) J. Hansen, R. Ruedy, M. Sato, and K. Lo, "Global surface temperature change", Reviews of Geophysics, 48(RG4004), 2010, http://dx.doi.org/10.1029/2010RG000345. 58. (GISS-BEST) 3.667 (GISS) versus 3.670 (BEST). 59. Spar. The smoothing parameter is a constant which weights a penalty term proportional to the second directional derivative of the curve. The effect is that if a candidate spline is chosen which is very bumpy, this candidate is penalized and will only be chosen if the data demand it. More is said about the choice of such parameters in the caption of Figure 12. 60. (Ea2009) D. R. Easterling, M. F. Wehner, "Is the climate warming or cooling?", Geophysical Research Letters, 36, L08706, 2009, http://dx.doi.org/10.1029/2009GL037810. 61. Hiatus. The term hiatus has a formal meaning in climate science, as described by the IPCC itself (Box TS.3). 62. (Ea2000) D. J. Easterbrook, D. J. Kovanen, "Cyclical oscillation of Mt.
Baker glaciers in response to climatic changes and their correlation with periodic oceanographic changes in the northeast Pacific Ocean", 32, 2000, Proceedings of the Geological Society of America, Abstracts with Program, page 17, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014. 63. (Ea2001) D. J. Easterbrook, "The next 25 years: global warming or global cooling? Geologic and oceanographic evidence for cyclical climatic oscillations", 33, 2001, Proceedings of the Geological Society of America, Abstracts with Program, page 253, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014. 64. (Ea2005) D. J. Easterbrook, "Causes and effects of abrupt, global, climate changes and global warming", Proceedings of the Geological Society of America, 37, 2005, Abstracts with Program, page 41, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014. 65. (Ea2006a) D. J. Easterbrook, "The cause of global warming and predictions for the coming century", Proceedings of the Geological Society of America, 38(7), Abstracts with Programs, page 235, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014. 66. (Ea2006b) D. J. Easterbrook, "Causes of abrupt global climate changes and global warming predictions for the coming century", Proceedings of the Geological Society of America, 38, 2006, Abstracts with Program, page 77, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014. 67. (Ea2007) D. J. Easterbrook, "Geologic evidence of recurring climate cycles and their implications for the cause of global warming and climate changes in the coming century", Proceedings of the Geological Society of America, 39(6), Abstracts with Programs, page 507, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014. 68. (Ea2008) D. J.
Easterbrook, "Correlation of climatic and solar variations over the past 500 years and predicting global climate changes from recurring climate cycles", Proceedings of the International Geological Congress, 2008, Oslo, Norway. 69. (Wi2007) J. K. Willis, J. M. Lyman, G. C. Johnson, J. Gilson, "Correction to 'Recent cooling of the upper ocean'", Geophysical Research Letters, 34, L16601, 2007, http://dx.doi.org/10.1029/2007GL030323. 70. (Ra2006) N. Rayner, P. Brohan, D. Parker, C. Folland, J. Kennedy, M. Vanicek, T. Ansell, S. Tett, "Improved analyses of changes and uncertainties in sea surface temperature measured in situ since the mid-nineteenth century: the HadSST2 dataset", Journal of Climate, 19, 1 February 2006, http://dx.doi.org/10.1175/JCLI3637.1. 71. (Pi2006) R. Pielke Sr., "The Lyman et al paper 'Recent cooling in the upper ocean' has been published", blog entry, September 29, 2006, 8:09 AM, https://pielkeclimatesci.wordpress.com/2006/09/29/the-lyman-et-al-paper-recent-cooling-in-the-upper-ocean-has-been-published/, last accessed 24 April 2014. 72. (Ko2013) Y. Kosaka, S.-P. Xie, "Recent global-warming hiatus tied to equatorial Pacific surface cooling", Nature, 501, 2013, 403–407, http://dx.doi.org/10.1038/nature12534. 73. (Ke1998) C. D. Keeling, "Rewards and penalties of monitoring the Earth", Annual Review of Energy and the Environment, 23, 1998, 25–82, http://dx.doi.org/10.1146/annurev.energy.23.1.25. 74. (Wa1990) G. Wahba, Spline Models for Observational Data, Society for Industrial and Applied Mathematics (SIAM), 1990. 75. (Go1979) G. H. Golub, M. Heath, G. Wahba, "Generalized cross-validation as a method for choosing a good ridge parameter", Technometrics, 21(2), May 1979, 215-223, http://www.stat.wisc.edu/~wahba/ftp1/oldie/golub.heath.wahba.pdf. 76. (Cr1979) P. Craven, G.
Wahba, "Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation", Numerische Mathematik, 31, 1979, 377-403, http://www.stat.wisc.edu/~wahba/ftp1/oldie/craven.wah.pdf. 77. (Sa2013) S. Särkkä, Bayesian Filtering and Smoothing, Cambridge University Press, 2013. 78. (Co2009) P. S. P. Cowpertwait, A. V. Metcalfe, Introductory Time Series With R, Springer, 2009. 79. (Ko2005) R. Koenker, Quantile Regression, Cambridge University Press, 2005. 80. (Du2012) J. Durbin, S. J. Koopman, Time Series Analysis by State Space Methods, Oxford University Press, 2012. 81. Process variance. Here, the process variance was taken to be $\frac{1}{50}$ of the observations variance. 82. Probabilities. "In this Report, the following terms have been used to indicate the assessed likelihood of an outcome or a result: Virtually certain 99-100% probability, Very likely 90-100%, Likely 66-100%, About as likely as not 33-66%, Unlikely 0-33%, Very unlikely 0-10%, Exceptionally unlikely 0-1%. Additional terms (Extremely likely: 95-100%, More likely than not 50-100%, and Extremely unlikely 0-5%) may also be used when appropriate. Assessed likelihood is typeset in italics, e.g., very likely (see Section 1.4 and Box TS.1 for more details)." 83. (Ki2013) E. Kintisch, "Researchers wary as DOE bids to build sixth U.S. climate model", Science, 341(6151), 13 September 2013, page 1160, http://dx.doi.org/10.1126/science.341.6151.1160. 84. Inez Fung. "'It's great there's a new initiative,' says modeler Inez Fung of DOE's Lawrence Berkeley National Laboratory and the University of California, Berkeley. 'But all the modeling efforts are very short-handed. More brains working on one set of code would be better than working separately.'" 85. Exchangeability. Exchangeability is a weaker assumption than independence.
Random variables are exchangeable if their joint distribution depends only upon the set of variables, and not their order (Di1977, Di1988, Ro2013c). Note the caution in Coolen. 86. (Di1977) P. Diaconis, "Finite forms of de Finetti's theorem on exchangeability", Synthese, 36, 1977, 271-281. 87. (Di1988) P. Diaconis, "Recent progress on de Finetti's notions of exchangeability", Bayesian Statistics, 3, 1988, 111-125. 88. (Ro2013c) J. C. Rougier, M. Goldstein, L. House, "Second-order exchangeability analysis for multi-model ensembles", Journal of the American Statistical Association, 108, 2013, 852-863, http://dx.doi.org/10.1080/01621459.2013.802963. 89. (Co2005) F. P. A. Coolen, "On nonparametric predictive inference and objective Bayesianism", Journal of Logic, Language and Information, 15, 2006, 21-47, http://dx.doi.org/10.1007/s10849-005-9005-7. ("Generally, though, both for frequentist and Bayesian approaches, statisticians are often happy to assume exchangeability at the prior stage. Once data are used in combination with model assumptions, exchangeability no longer holds ‘post-data’ due to the influence of modelling assumptions, which effectively are based on mostly subjective input added to the information from the data."). 90. (Ch2008) M. R. Chernick, Bootstrap Methods: A Guide for Practitioners and Researchers, 2nd edition, 2008, John Wiley & Sons. 91. (Da2009) A. C. Davison, D. V. Hinkley, Bootstrap Methods and their Application, first published 1997, 11th printing, 2009, Cambridge University Press. 92. (Mu2007) M. Mudelsee, M. Alkio, "Quantifying effects in two-sample environmental experiments using bootstrap confidence intervals", Environmental Modelling and Software, 22, 2007, 84-96, http://dx.doi.org/10.1016/j.envsoft.2005.12.001. 93. (Wi2011) D. S. Wilks, Statistical Methods in the Atmospheric Sciences, 3rd edition, 2011, Academic Press. 94. (Pa2006) T. N. Palmer, R. Buizza, R. Hagedorn, A. Lawrence, M. Leutbecher, L.
Smith, "Ensemble prediction: A pedagogical perspective", ECMWF Newsletter, 106, 2006, 10–17. 95. (To2001) Z. Toth, Y. Zhu, T. Marchok, "The use of ensembles to identify forecasts with small and large uncertainty", Weather and Forecasting, 16, 2001, 463–477, http://dx.doi.org/10.1175/1520-0434(2001)0162.0.CO;2. 96. (Le2013a) L. A. Lee, K. J. Pringle, C. I. Reddington, G. W. Mann, P. Stier, D. V. Spracklen, J. R. Pierce, K. S. Carslaw, "The magnitude and causes of uncertainty in global model simulations of cloud condensation nuclei", Atmospheric Chemistry and Physics Discussion, 13, 2013, 6295-6378, http://www.atmos-chem-phys.net/13/9375/2013/acp-13-9375-2013.pdf. 97. (Gl2011) D. M. Glover, W. J. Jenkins, S. C. Doney, Modeling Methods for Marine Science, Cambridge University Press, 2011. 98. (Ki2014) E. Kintisch, "Climate outsider finds missing global warming", Science, 344 (6182), 25 April 2014, page 348, http://dx.doi.org/10.1126/science.344.6182.348. 99. (GL2011) D. M. Glover, W. J. Jenkins, S. C. Doney, Modeling Methods for Marine Science, Cambridge University Press, 2011, Chapter 7. 100. (Le2013b) L. A. Lee, "Uncertainties in climate models: Living with uncertainty in an uncertain world", Significance, 10(5), October 2013, 34-39, http://dx.doi.org/10.1111/j.1740-9713.2013.00697.x. 101. (Ur2014) N. M. Urban, P. B. Holden, N. R. Edwards, R. L. Sriver, K. Keller, "Historical and future learning about climate sensitivity", Geophysical Research Letters, 41, http://dx.doi.org/10.1002/2014GL059484. 102. (Th2005) P. W. Thorne, D. E. Parker, J. R. Christy, C. A. Mears, "Uncertainties in climate trends: Lessons from upper-air temperature records", Bulletin of the American Meteorological Society, 86, 2005, 1437-1442, http://dx.doi.org/10.1175/BAMS-86-10-1437. 103. (Fr2008) C. Fraley, A. E. Raftery, T. Gneiting, "Calibrating multimodel forecast ensembles with exchangeable and missing members using Bayesian model averaging", Monthly Weather Review. 
138, January 2010, http://dx.doi.org/10.1175/2009MWR3046.1. 104. (Ow2001) A. B. Owen, Empirical Likelihood, Chapman & Hall/CRC, 2001. 105. (Al2012) M. Aldrin, M. Holden, P. Guttorp, R. B. Skeie, G. Myhre, T. K. Berntsen, "Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperatures and global ocean heat content", Environmetrics, 23, 2012, 253-257, http://dx.doi.org/10.1002/env.2140. 106. (AS2007) "ASA Statement on Climate Change", American Statistical Association, ASA Board of Directors, adopted 30 November 2007, http://www.amstat.org/news/climatechange.cfm, last visited 13 September 2013. 107. (Be2008) L. M. Berliner, Y. Kim, "Bayesian design and analysis for superensemble-based climate forecasting", Journal of Climate, 21, 1 May 2008, http://dx.doi.org/10.1175/2007JCLI1619.1. 108. (Fe2011a) X. Feng, T. DelSole, P. Houser, "Bootstrap estimated seasonal potential predictability of global temperature and precipitation", Geophysical Research Letters, 38, L07702, 2011, http://dx.doi.org/10.1029/2010GL046511. 109. (Fr2013) P. Friedlingstein, M. Meinshausen, V. K. Arora, C. D. Jones, A. Anav, S. K. Liddicoat, R. Knutti, "Uncertainties in CMIP5 climate projections due to carbon cycle feedbacks", Journal of Climate, 2013, http://dx.doi.org/10.1175/JCLI-D-12-00579.1. 110. (Ho2003) T. J. Hoar, R. F. Milliff, D. Nychka, C. K. Wikle, L. M. Berliner, "Winds from a Bayesian hierarchical model: Computations for atmosphere-ocean research", Journal of Computational and Graphical Statistics, 12(4), 2003, 781-807, http://www.jstor.org/stable/1390978. 111. (Jo2013) V. E. Johnson, "Revised standards for statistical evidence", Proceedings of the National Academy of Sciences, 11 November 2013, http://dx.doi.org/10.1073/pnas.1313476110, published online before print. 112. (Ka2013b) J.
Karlsson, J. Svensson, "Consequences of poor representation of Arctic sea-ice albedo and cloud-radiation interactions in the CMIP5 model ensemble", Geophysical Research Letters, 40, 2013, 4374-4379, http://dx.doi.org/10.1002/grl.50768. 113. (Kh2002) V. V. Kharin, F. W. Zwiers, "Climate predictions with multimodel ensembles", Journal of Climate, 15, 1 April 2002, 793-799. 114. (Kr2011) J. K. Kruschke, Doing Bayesian Data Analysis: A Tutorial with R and BUGS, Academic Press, 2011. 115. (Li2008) X. R. Li, X.-B. Li, "Common fallacies in hypothesis testing", Proceedings of the 11th IEEE International Conference on Information Fusion, 2008, New Orleans, LA. 116. (Li2013) J.-L. F. Li, D. E. Waliser, G. Stephens, S. Lee, T. L’Ecuyer, S. Kato, N. Loeb, H.-Y. Ma, "Characterizing and understanding radiation budget biases in CMIP3/CMIP5 GCMs, contemporary GCM, and reanalysis", Journal of Geophysical Research: Atmospheres, 118, 2013, 8166-8184, http://dx.doi.org/10.1002/jgrd.50378. 117. (Ma2013b) E. Maloney, S. Camargo, E. Chang, B. Colle, R. Fu, K. Geil, Q. Hu, X. Jiang, N. Johnson, K. Karnauskas, J. Kinter, B. Kirtman, S. Kumar, B. Langenbrunner, K. Lombardo, L. Long, A. Mariotti, J. Meyerson, K. Mo, D. Neelin, Z. Pan, R. Seager, Y. Serra, A. Seth, J. Sheffield, J. Stroeve, J. Thibeault, S. Xie, C. Wang, B. Wyman, and M. Zhao, "North American Climate in CMIP5 Experiments: Part III: Assessment of 21st Century Projections", Journal of Climate, 2013, in press, http://dx.doi.org/10.1175/JCLI-D-13-00273.1. 118. (Mi2007) S.-K. Min, D. Simonis, A. Hense, "Probabilistic climate change predictions applying Bayesian model averaging", Philosophical Transactions of the Royal Society, Series A, 365, 15 August 2007, http://dx.doi.org/10.1098/rsta.2007.2070. 119. (Ni2001) N. Nicholls, "The insignificance of significance testing", Bulletin of the American Meteorological Society, 82, 2001, 971-986. 120. (Pe2008) G. Pennello, L.
Thompson, "Experience with reviewing Bayesian medical device trials", Journal of Biopharmaceutical Statistics, 18(1), 81-115. 121. (Pl2013) M. Plummer, "Just Another Gibbs Sampler", JAGS, 2013. Plummer describes this in greater detail at "JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling", Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), 20-22 March 2003, Vienna. See also M. J. Denwood, (in review) "runjags: An R package providing interface utilities, parallel computing methods and additional distributions for MCMC models in JAGS", Journal of Statistical Software, and http://cran.r-project.org/web/packages/runjags/. See also J. Kruschke, "Another reason to use JAGS instead of BUGS", http://doingbayesiandataanalysis.blogspot.com/2012/12/another-reason-to-use-jags-instead-of.html, 21 December 2012. 122. (Po1994) D. N. Politis, J. P. Romano, "The Stationary Bootstrap", Journal of the American Statistical Association, 89(428), 1994, 1303-1313, http://dx.doi.org/10.1080/01621459.1994.10476870. 123. (Sa2002) C.-E. Särndal, B. Swensson, J. Wretman, Model Assisted Survey Sampling, Springer, 1992. 124. (Ta2012) K. E. Taylor, R. J. Stouffer, G. A. Meehl, "An overview of CMIP5 and the experiment design", Bulletin of the American Meteorological Society, 93, 2012, 485-498, http://dx.doi.org/10.1175/BAMS-D-11-00094.1. 125. (To2013) A. Toreti, P. Naveau, M. Zampieri, A. Schindler, E. Scoccimarro, E. Xoplaki, H. A. Dijkstra, S. Gualdi, J. Luterbacher, "Projections of global changes in precipitation extremes from CMIP5 models", Geophysical Research Letters, 2013, http://dx.doi.org/10.1002/grl.50940. 126. (WC2013) World Climate Research Programme (WCRP), "CMIP5: Coupled Model Intercomparison Project", http://cmip-pcmdi.llnl.gov/cmip5/, last visited 13 September 2013. 127. (We2011) M. B. Westover, K. D. Westover, M. T.
Bianchi, "Significance testing as perverse probabilistic reasoning", BMC Medicine, 9(20), 2011, http://www.biomedcentral.com/1741-7015/9/20. 128. (Zw2004) F. W. Zwiers, H. Von Storch, "On the role of statistics in climate research", International Journal of Climatology, 24, 2004, 665-680. 129. (Ra2005) A. E. Raftery, T. Gneiting, F. Balabdaoui, M. Polakowski, "Using Bayesian model averaging to calibrate forecast ensembles", Monthly Weather Review, 133, 1155–1174, http://dx.doi.org/10.1175/MWR2906.1. 130. (Ki2010) G. Kitagawa, Introduction to Time Series Modeling, Chapman & Hall/CRC, 2010. 131. (Hu2010) C. W. Hughes, S. D. P. Williams, "The color of sea level: Importance of spatial variations in spectral shape for assessing the significance of trends", Journal of Geophysical Research, 115, C10048, 2010, http://dx.doi.org/10.1029/2010JC006102.
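Notes 59 and 74–76 turn on the same idea: a smoother that trades fidelity to the data against a penalty on curvature, with the trade-off set by a smoothing parameter. As a rough illustration of that penalty idea only (a discrete Whittaker-style smoother on synthetic data; this is not the spline code or the data behind the post's figures, and `lam` plays the role the spline's `spar` does there):

```python
import numpy as np

def whittaker_smooth(y, lam):
    """Penalized smoother: minimize ||y - z||^2 + lam * ||D2 z||^2,
    where D2 is the second-difference operator (discrete curvature).
    The normal equations give (I + lam * D2' D2) z = y."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    A = np.eye(n) + lam * (D.T @ D)
    return np.linalg.solve(A, y)

# Synthetic noisy signal (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(200)

rough = whittaker_smooth(y, lam=1.0)
smooth = whittaker_smooth(y, lam=1e4)

# A "bumpy" candidate is penalized: curvature decreases as lam grows.
curvature = lambda z: float(np.sum(np.diff(z, 2) ** 2))
assert curvature(smooth) < curvature(rough) < curvature(y)
```

In the spline setting of Wa1990 and Go1979/Cr1979, `lam` itself would then be chosen by generalized cross-validation rather than by hand.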
## w Boltzmann Equation for Entropy: $S = k_{B} \ln W$ Sophia Dinh 1D Posts: 100 Joined: Thu Jul 25, 2019 12:15 am ### w Do you need Avogadro's number to find degeneracy? Indy Bui 1l Posts: 99 Joined: Sat Sep 07, 2019 12:19 am ### Re: w I think this is because we want to find one mole's worth, which is Avogadro's number of atoms or molecules. Jonathan Haimowitz 3B Posts: 42 Joined: Wed Nov 18, 2020 12:28 am Been upvoted: 2 times ### Re: w You need to multiply N by Avogadro's number to calculate the degeneracy of N moles of a certain substance, but not to calculate that of N particles. Ava Nickman Posts: 100 Joined: Wed Sep 30, 2020 9:54 pm ### Re: w Yes, you do. Kaley Qin 1F Posts: 92 Joined: Wed Sep 30, 2020 9:54 pm ### Re: w No, you don't need Avogadro's number to find degeneracy. You will only use Avogadro's number for degeneracy if you are finding the degeneracy of a mole of molecules. Jiwon_Chae_3L Posts: 88 Joined: Wed Sep 30, 2020 9:39 pm ### Re: w You don't necessarily need to use Avogadro's constant to find degeneracy, but if a question requires you to know the degeneracy of a mole of a substance, Avogadro's constant is needed as the value of the "number of particles" within the states. Abhinav Behl 3G Posts: 40 Joined: Wed Nov 18, 2020 12:25 am ### Re: w You would only need to apply Avogadro's number if the question asks you to find the degeneracy of a mole of the given substance. Emma_Barrall_3J Posts: 90 Joined: Wed Sep 30, 2020 9:55 pm ### Re: w Only if you are looking in the context of per mole. Posts: 40 Joined: Thu Dec 17, 2020 12:18 am ### Re: w We do need to use Avogadro's number if we want to find the degeneracy of N moles of a certain substance. However, if you are simply calculating the degeneracy of a certain number of particles, you do not use Avogadro's number. Ayesha Aslam-Mir 3C Posts: 115 Joined: Wed Sep 30, 2020 9:43 pm ### Re: w Kaley Qin 1F wrote: No, you don't need Avogadro's number to find degeneracy.
You will only use Avogadro's number for degeneracy if you are finding the degeneracy of a mole of molecules.

Well said! Just like with the first law of thermodynamics, we need to consider the context and all factors: here, whether we are looking for degeneracy per mole or just among particles. If you need to figure it out per mole, you'll need Avogadro's number.

Brandon Le 3C (Posts: 89, Joined: Wed Sep 30, 2020 9:49 pm)

### Re: w

You would only need Avogadro's number if you were to find the degeneracy of a specific number of moles of a molecule, much like how you would only use Avogadro's number to find the mass of only one atom of an element when given the molar mass.

jessicasilverstein1F (Posts: 122, Joined: Wed Sep 30, 2020 9:57 pm)

### Re: w

Yes, if looking in terms of per mole.

Akriti Ratti 1H (Posts: 90, Joined: Wed Sep 30, 2020 9:36 pm)

### Re: w

Yes, if you're working with moles.
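To make the thread's point concrete, here is a small Python sketch (my own illustration, not from the course): for N particles that each have some number of equally likely states, $W = \text{states}^{N}$, so $S = k_B \ln W = N k_B \ln(\text{states})$, and Avogadro's number enters only when N counts a mole's worth of particles.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro's number, 1/mol

def boltzmann_entropy(states_per_particle, n_particles):
    # W = states_per_particle ** n_particles is astronomically large for a
    # mole of particles, so evaluate S = k_B * ln W = k_B * N * ln(states)
    # directly instead of computing W first.
    return k_B * n_particles * math.log(states_per_particle)

# Degeneracy entropy of a single particle with 2 available states: tiny.
s_one = boltzmann_entropy(2, 1)

# One mole of such particles: multiply N by Avogadro's number first.
s_mole = boltzmann_entropy(2, 1 * N_A)   # equals R * ln 2, about 5.76 J/K
```

Note that $k_B N_A = R$, which is why the per-mole answer comes out as $R \ln 2$.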
2021-03-01 19:20:47
http://nrich.maths.org/public/leg.php?code=5039&cl=3&cldcmpid=2669
# Search by Topic

Resources tagged with Interactivities similar to Stars. There are 154 results.

Broad Topics > Information and Communications Technology > Interactivities

Stars (Stage 3): Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?

Cosy Corner (Stage 3): Six balls of various colours are randomly shaken into a triangular arrangement. What is the probability of having at least one red in the corner?

Number Pyramids (Stage 3): Try entering different sets of numbers in the number pyramids. How does the total at the top change?

Two's Company (Stage 3): 7 balls are shaken in a container. You win if the two blue balls touch. What is the probability of winning?

Cops and Robbers (Stages 2 and 3): Can you find a reliable strategy for choosing coordinates that will locate the robber in the minimum number of guesses?

Partitioning Revisited (Stage 3): We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4.

Square Coordinates (Stage 3): A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?

Semi-regular Tessellations (Stage 3): Semi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?

Got It (Stages 2 and 3): A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.

Cogs (Stage 3): A and B are two interlocking cogwheels having p teeth and q teeth respectively. One tooth on B is painted red. Find the values of p and q for which the red tooth on B contacts every gap on the. . . .

See the Light (Stages 2 and 3): Work out how to light up the single light. What's the rule?

Multiplication Tables - Matching Cards (Stages 1, 2 and 3): Interactive game. Set your own level of challenge, practise your table skills and beat your previous best score.

Volume of a Pyramid and a Cone (Stage 3): These formulae are often quoted, but rarely proved. In this article, we derive the formulae for the volumes of a square-based pyramid and a cone, using relatively simple mathematical concepts.

Konigsberg Plus (Stage 3): Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.

Picturing Triangle Numbers (Stage 3): Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?

Tilted Squares (Stage 3): It's easy to work out the areas of most squares that we meet, but what if they were tilted?

Subtended Angles (Stage 3): What is the relationship between the angle at the centre and the angles at the circumference, for angles which stand on the same arc? Can you prove it?

Shear Magic (Stage 3): What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?

Balancing 2 (Stage 3): Meg and Mo still need to hang their marbles so that they balance, but this time the constraints are different. Use the interactivity to experiment and find out what they need to do.

Power Crazy (Stage 3): What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties?

Isosceles Triangles (Stage 3): Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?

Lost (Stage 3): Can you locate the lost giraffe? Input coordinates to help you search and find the giraffe in the fewest guesses.

Got It Article (Stages 2 and 3): This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy.

Online (Stages 2 and 3): A game for 2 players that can be played online. Players take it in turns to select a word from the 9 words given. The aim is to select all the occurrences of the same letter.

Diamond Mine (Stage 3): Practise your diamond mining skills and your x,y coordination in this homage to Pacman.

Fifteen (Stages 2 and 3): Can you spot the similarities between this game and other games you know? The aim is to choose 3 numbers that total 15.

Shuffles Tutorials (Stage 3): Learn how to use the Shuffles interactivity by running through these tutorial demonstrations.

Factors and Multiples - Secondary Resources (Stages 3 and 4): A collection of resources to support work on Factors and Multiples at Secondary level.

Top Coach (Stage 3): Carry out some time trials and gather some data to help you decide on the best training regime for your rowing crew.

Balancing 3 (Stage 3): Mo has left, but Meg is still experimenting. Use the interactivity to help you find out how she can alter her pouch of marbles and still keep the two pouches balanced.

Archery (Stage 3): Imagine picking up a bow and some arrows and attempting to hit the target a few times. Can you work out the settings for the sight that give you the best chance of gaining a high score?

Flip Flop - Matching Cards (Stages 1, 2 and 3): A game for 1 person to play on screen. Practise your number bonds whilst improving your memory.

Balancing 1 (Stage 3): Meg and Mo need to hang their marbles so that they balance. Use the interactivity to experiment and find out what they need to do.

Drips (Stages 2 and 3): An animation that helps you understand the game of Nim.

First Connect Three for Two (Stages 2 and 3): First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.

Countdown (Stages 2 and 3): Here is a chance to play a version of the classic Countdown Game.

Factors and Multiples Game (Stages 2, 3 and 4): A game in which players take it in turns to choose a number. Can you block your opponent?

Square It (Stages 1, 2, 3 and 4): Players take it in turns to choose a dot on the grid. The winner is the first to have four dots that can be joined to form a square.

Magic Potting Sheds (Stage 3): Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?

Triangles in Circles (Stage 3): How many different triangles can you make which consist of the centre point and two of the points on the edge? Can you work out each of their angles?

More Magic Potting Sheds (Stage 3): The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?

Disappearing Square (Stage 3): Do you know how to find the area of a triangle? You can count the squares. What happens if we turn the triangle on end? Press the button and see. Try counting the number of units in the triangle now. . . .

Diagonal Dodge (Stages 2 and 3): A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.

Poly-puzzle (Stage 3): This rectangle is cut into five pieces which fit exactly into a triangular outline and also into a square outline where the triangle, the rectangle and the square have equal areas.

Nim-interactive (Stages 3 and 4): Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.

Which Spinners? (Stages 3 and 4): Can you work out which spinners were used to generate the frequency charts?

Muggles Magic (Stage 3): You can move the 4 pieces of the jigsaw and fit them into both outlines. Explain what has happened to the missing one unit of area.

Square It for Two (Stages 1, 2, 3 and 4): Square It game for an adult and child. Can you come up with a way of always winning this game?

A Tilted Square (Stage 4): The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?

Tilting Triangles (Stage 4): A right-angled isosceles triangle is rotated about the centre point of a square. What can you say about the area of the part of the square covered by the triangle as it rotates?
2016-10-01 03:17:04
https://www.vedantu.com/question-answer/three-taps-a-b-c-fill-up-a-tank-independently-in-class-9-maths-cbse-5edf380911ac812938711cc5
# Question

Three taps A, B, C fill up a tank independently in 10 hr, 20 hr, 30 hr respectively. Initially, the tank is empty, exactly one pair of taps is open during each hour, and every pair of taps is open for at least an hour. What is the minimum number of hours required to fill the tank?

(a) 8 (b) 9 (c) 10 (d) 11

Hint: Find the capacity of the tank filled by each pair of taps AB, BC and CA in 1 hour, and then use the pair of taps which fills the highest capacity of the tank in 1 hour to fill the remaining tank.

We are given that three taps A, B, C fill up the tank independently in 10 hr, 20 hr, 30 hr respectively. We have to find the minimum number of hours required to fill the tank if each pair of taps is open for at least an hour.

Since 60 is the LCM of 10, 20 and 30, let us take the total capacity of the tank as L = 60 liters.

A can fill the full tank in 10 hours; that is, in 10 hours A can fill 60 liters. So, in 1 hour, A can fill $\dfrac{60}{10}=6\text{ liters}$.

Similarly, in 20 hours B can fill 60 liters. So, in 1 hour, B can fill $\dfrac{60}{20}=3\text{ liters}$.

Similarly, in 30 hours C can fill 60 liters. So, in 1 hour, C can fill $\dfrac{60}{30}=2\text{ liters}$.

Therefore, in 1 hour, the pair A and B can fill 6 + 3 = 9 liters. Similarly, in 1 hour, the pair B and C can fill 3 + 2 = 5 liters, and the pair C and A can fill 6 + 2 = 8 liters.

It is given in the question that each pair is open for at least an hour. Therefore, if we open AB for the first hour, BC for the second hour and CA for the third hour, then in these 3 hours they fill 9 + 5 + 8 = 22 liters. Since the total capacity is 60 liters, we still have 60 - 22 = 38 liters to be filled.

The tank must be filled in the minimum number of hours, and we have already found that the pair AB fills the highest capacity of the tank in 1 hour, namely 9 liters.
Therefore, we will use the pair AB to fill the remaining part of the tank, which, as found above, is 38 liters. If AB takes x hours to fill the remaining tank, then 9x = 38 liters, or $x=\dfrac{38}{9}\text{ hours}=4.22\text{ hours}$.

Therefore, the total time taken to fill the tank is
\begin{align}
  & =3+4.22 \\
 & =7.22\text{ hours} \\
\end{align}

Since 7.22 hours means the tank becomes full during the 8th hour, option (a) is correct.

Note: In these types of questions, students often take the total capacity of the tank to be x and get confused with fractions. It is therefore advisable to take the total capacity to be a convenient number, preferably the LCM of the given times, so that it cancels easily. Also, remember that each pair of taps must be open for at least 1 hour.
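The arithmetic in the solution above can be double-checked with a short Python sketch (variable names are mine):

```python
import math

capacity = 60                      # litres, the LCM of 10, 20 and 30
rate = {"A": 60 / 10, "B": 60 / 20, "C": 60 / 30}   # litres/hour: 6, 3, 2

# Every pair must be open for at least one hour:
first_three_hours = (rate["A"] + rate["B"]) \
                  + (rate["B"] + rate["C"]) \
                  + (rate["C"] + rate["A"])          # 9 + 5 + 8 = 22 litres

# Finish with the fastest pair, A+B at 9 litres/hour:
remaining = capacity - first_three_hours             # 38 litres
extra = remaining / (rate["A"] + rate["B"])          # about 4.22 hours

total_hours = math.ceil(3 + extra)                   # full during the 8th hour
```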
2021-10-23 08:52:18
http://www.moneyscience.com/pg/blog/arXiv/read/838784/an-optimal-extraction-problem-with-price-impact-arxiv181201270v1-mathoc
## An Optimal Extraction Problem with Price Impact. (arXiv:1812.01270v1 [math.OC])

Tue, 04 Dec 2018 19:48:46 GMT

A price-maker company extracts an exhaustible commodity from a reservoir, and sells it instantaneously in the spot market. In the absence of any actions of the company, the commodity's spot price evolves either as a drifted Brownian motion or as an Ornstein-Uhlenbeck process. While extracting, the company affects the market price of the commodity, and its actions have an impact on the dynamics of the commodity's spot price. The company aims at maximizing the total expected profits from selling the commodity, net of the total expected proportional costs of extraction. We model this problem as a two-dimensional degenerate singular stochastic control problem with finite fuel. To determine its solution, we construct an explicit solution to the associated Hamilton-Jacobi-Bellman equation, and then verify its actual optimality through a verification theorem. On the one hand, when the (uncontrolled) price is a drifted Brownian motion, it is optimal to extract whenever the current price level is greater than or equal to an endogenously determined constant threshold. On the other hand, when the (uncontrolled) price evolves as an Ornstein-Uhlenbeck process, we show that the optimal extraction rule is triggered by a curve depending on the current level of the reservoir. Such a curve is a strictly decreasing $C^{\infty}$-function for which we are able to provide an explicit expression. Finally, our study is complemented by a theoretical and numerical analysis of the dependency of the optimal extraction strategy and value function on the model's parameters.
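For readers unfamiliar with the price model: an Ornstein-Uhlenbeck process mean-reverts toward a long-run level. A minimal Euler-Maruyama simulation is sketched below (the parameter values are arbitrary, and the paper's extraction control and price-impact terms are not modelled here):

```python
import math
import random

def simulate_ou(x0, kappa, theta, sigma, dt, steps, seed=42):
    """Euler-Maruyama discretisation of dX = kappa*(theta - X) dt + sigma dW."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x += kappa * (theta - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# A price starting at 5 reverts toward the long-run level 1 over ten
# units of time, with small fluctuations around it.
path = simulate_ou(x0=5.0, kappa=2.0, theta=1.0, sigma=0.1, dt=0.01, steps=1000)
```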
2019-04-24 04:03:46
https://answers.ros.org/question/232276/dwaplannerros-making-spiral-trajectory-while-following-global-path/
# DWAPlannerROS making spiral trajectory while following global path

Hi all, I'm trying to configure the DWA planner for a custom skid-drive platform, but I'm unable to get good behaviour from the planner. As it tries to follow the global path, the planner always chooses an almost circular trajectory, alternating forward and backward movements. As a result, the platform ends up following the path in a kind of spiral trajectory. This behaviour can be observed in the following video. My initial guess was that it's trying to get to the path as fast as possible, so I've tried to reduce path_distance_bias while increasing goal_distance_bias. My hope was that it would tend more towards the goal and less towards the path, thus avoiding turning so hard. However, this change didn't show any improvement. We need DWA, as TrajectoryPlannerROS is not able to make backwards trajectories and has a hard time achieving goals with our required precision. Any help will be appreciated. I'm using ROS Hydro on Ubuntu 12.04.
Relevant configuration files:

DWA params:

    DWAPlannerROS:
      acc_lim_x: 2.5
      acc_lim_y: 0.0   # Not holonomic == 0
      acc_lim_th: 3.2

      max_trans_vel: 0.55
      min_trans_vel: 0.1
      max_vel_x: 0.55
      min_vel_x: -0.55
      max_vel_y: 0.0   # Not holonomic == 0
      min_vel_y: 0.0   # Not holonomic == 0
      max_rot_vel: 1.0
      min_rot_vel: 0.3

      # Goal Tolerance Parameters
      yaw_goal_tolerance: 0.05
      xy_goal_tolerance: 0.10
      latch_xy_goal_tolerance: false

      # Forward Simulation Parameters
      sim_time: 1.7
      sim_granularity: 0.025
      vx_samples: 3
      vy_samples: 0    # Not holonomic
      vtheta_samples: 20
      controller_frequency: 20.0
      penalize_negative_x: false

      # Trajectory Scoring Parameters
      path_distance_bias: 0.1    #32.0
      goal_distance_bias: 50     #24.0
      occdist_scale: 0.1         #0.01
      forward_point_distance: 0.325
      stop_time_buffer: 0.2
      scaling_speed: 0.25
      max_scaling_factor: 0.2

      # Oscillation Prevention Parameters
      oscillation_reset_dist: 0.05

      # Global Plan Parameters
      prune_plan: true

move_base params:

    base_global_planner: navfn/NavfnROS
    base_local_planner: dwa_local_planner/DWAPlannerROS
    recovery_behaviors: [{name: conservative_reset, type: clear_costmap_recovery/ClearCostmapRecovery}, {name: rotate_recovery, type: rotate_recovery/RotateRecovery}, {name: aggressive_reset, type: clear_costmap_recovery/ClearCostmapRecovery}]
    controller_frequency: 10.0
    planner_patience: 5.0
    controller_patience: 15.0
    conservative_reset_dist: 3.0
    recovery_behavior_enabled: false
    clearing_rotation_allowed: false
    shutdown_costmaps: false
    oscillation_timeout: 0.0
    oscillation_distance: 0.5
    planner_frequency: 0.5

[EDIT] It seems that my problem was similar to the one reported in this other question. The bypass proposed there (increasing the acc_lim values to some very high numbers) also worked for me. I also noticed that acc_lim_th seems to be ignored; acc_lim_theta is the correct parameter name. It looks like the wiki documentation is outdated.

I set the theta acceleration limit to twice the X acceleration limit.
Decreasing sim_time also helps performance a little, and decreasing sim_time to 1.0 helps in some short-distance rotation cases.

Actually, I tested different sim_time values ranging from 1.0 up to 3.0 seconds. A longer sim_time seemed to alleviate the reported behaviour a bit, but it was still present very frequently. I will test your suggestion of 2x theta acceleration. Thank you!
2022-08-08 14:17:36
https://www.collabra.org/articles/10.1525/collabra.53/
# Predicting Search Performance in Heterogeneous Visual Search Scenes with Real-World Objects

## Abstract

Previous work in our lab has demonstrated that efficient visual search with a fixed target has a reaction time by set size function that is best characterized by logarithmic curves. Further, the steepness of these logarithmic curves is determined by the similarity between target and distractor items (Buetti et al., 2016). A theoretical account of these findings was proposed, namely that a parallel, unlimited capacity, exhaustive processing architecture is underlying such data. Here, we conducted two experiments to expand these findings to a set of real-world stimuli, in both homogeneous and heterogeneous search displays. We used computational simulations of this architecture to identify a way to predict RT performance in heterogeneous search using parameters estimated from homogeneous search data. Further, by examining the systematic deviation from our predictions in the observed data, we found evidence that early visual processing for individual items is not independent. Instead, items in homogeneous displays seemed to facilitate each other's processing by a multiplicative factor. These results challenge previous accounts of heterogeneity effects in visual search, and demonstrate the explanatory and predictive power of an approach that combines computational simulations and behavioral data to better understand performance in visual search.

Subject: Psychology

How to Cite: Wang, Z., Buetti, S., & Lleras, A. (2017). Predicting Search Performance in Heterogeneous Visual Search Scenes with Real-World Objects. Collabra: Psychology, 3(1), 6.
DOI: http://doi.org/10.1525/collabra.53

Published on 10 Mar 2017. Accepted on 19 Nov 2016. Submitted on 22 Aug 2016.

## Introduction

### Parallel processing in visual search

Starting from the retina, early stages of the human visual system are organized in a parallel architecture, so that low-level information is extracted and represented simultaneously for a wide view of the world (Breitmeyer, 1992). On the other hand, there are several central bottlenecks limiting the amount of information that the mind can actively maintain, process and respond to. Those bottlenecks are exemplified in phenomena like the limited capacity of visual working memory (Sperling, 1960; Luck & Vogel, 1997; Awh, Barton & Vogel, 2007), the psychological refractory period (Pashler, 1992; Sigman & Dehaene, 2008), and the attentional blink (Shapiro, Raymond & Arnell, 1997; Vogel, Luck & Shapiro, 1998). The need to both have parallel access to basic visual information around us and to focus high-level processing on select information is a key constraint in the study of visual attention.

Because the mind is almost always motivated by a specific goal, understanding goal-directed visual processing is thus essential to understanding visual attention. By presenting a target object among various distracting items, visual search provides a convenient method to capture the goal-directed selection process of attention and therefore has been widely used and studied in visual attention research. Notably, much of the effort in the visual search literature has been devoted to understanding focused attention, a capacity-limited form of visual attention where items or subsets of items in the display are serially processed.
Plenty of empirical research has thus focused on the dependent variable of search slope, that is, how much longer on average it takes for the visual system to process an additional item, and tried to establish relationships between different task settings and corresponding changes in search slopes. These task-setting variables include how many features define the target (Treisman & Gelade, 1980), whether the target is defined by known features (Bravo & Nakayama, 1992), the similarity between target and distractors (Duncan & Humphreys, 1989), or what specific features are used to differentiate target from distractors (Wolfe & Horowitz, 2004). When this kind of relationship is successfully mapped out, inferences can be drawn about the nature or function of focused attention. For example, a key question in the history of visual search research had been to examine under what conditions a search slope becomes non-zero, which was thought to reflect the limit of parallel processing in the visual system. The fact that conjunction search produced non-zero search slopes while feature search did not led to the suggestion that focused attention is necessary for the binding of different features onto an object file (Treisman & Gelade, 1980).

This approach assumes a linear relationship between reaction time and number of items (or 'set size'), which is often observed and is consistent with a capacity limitation in high-level processing. As a consequence, many prominent theories of visual search (and by extension, of visual attention) have been essentially accounts of the search slope given a specific search task (e.g., Duncan & Humphreys, 1989; Wolfe, 1994). Although an alternative focus on the accuracy of responses has also generated important insights, such as the signal detection theory of visual search (Verghese, 2001), the traditional method of studying reaction time as a function of set size has become one of the most popular approaches in attention research.
Perhaps partly because of this tradition, cognitive experimental research on visual search has become somewhat uninterested in understanding parallel processing in visual search. The specific reason for this lack of interest might be the assumption that parallel processing is synonymous with 'flat' search functions. This follows from two observations: first, when the linear regression of RT by set size returned a slope coefficient that is close to zero (or smaller than 10 ms/item), the search function is typically assumed to be a flat straight line; second, parallel processing with unlimited capacity was assumed to produce no (meaningful) additional time cost as additional items are introduced to a display. Therefore, when the search slope is found to be near zero, the usual inference is that items are processed in parallel and that there is no need for attentional selection. Thus, the data producing that pattern were considered uninformative for understanding visual attention. Yet, as discussed below, recent findings indicate that neither of these assumptions necessarily holds (Buetti, Cronin, Madison, Wang & Lleras, 2016).

Nevertheless, major theories of visual search assume that parallel visual processing produces no meaningful variability in reaction time. For example, Guided Search (Wolfe, 1994) used a fixed 400 ms constant for the time cost to process the search display, compute the 'priority map' (and prepare and execute a response to a target, once found). Bundesen's Theory of Visual Attention (1990) also had a mathematically explicit goal to make search time independent of set size in so-called pop-out searches, which were thought to depend entirely on parallel processing.

Our knowledge about the parallel processing of visual scenes has largely come from computationally-oriented approaches to vision, in which the central goal is to predict the series of loci of attention or eye fixations given a specific scene or image.
By the success of these computational models of attention in predicting human fixations, one can argue that low level, parallel computations carried out by these models mimic the parallel processing of human vision. For example, the work by Itti and Koch (2000; 2001) suggested that bottom-up, image-based control of attention can be formalized by the computation of a ‘saliency map’, a sum of various feature contrasts over different spatial scales. This model enjoys some degree of success in free-viewing tasks, particularly with complex scenes. Other saliency models operate on ideas such as self-information (Bruce & Tsotsos, 2009), Bayesian surprise (Itti & Baldi, 2009), and figure-ground segmentation (Zhang & Sclaroff, 2013), which can be seen as alternative accounts of the computational basis of bottom-up attention. One advantage of these models is that they are highly specific and allow for testable predictions regarding human performance, but the majority of these predictions are focused on measures like fixation distribution. The downside of many recent computational models of parallel vision is that their increased complexity does not allow for a clear understanding of the underlying mechanisms. For example, the currently top 2 ranking (using the AUC-Judd metric) saliency models (Kruthiventi, Ayush, & Babu, 2015; Vig, Dorr, & Cox, 2014) on the MIT Saliency Benchmark (Bylinskii, Judd, Durand, Oliva, & Torralba, 2014) are both based on deep neural networks. This approach is based on learning hierarchical features represented by multiple layers of neurons. However, there has been little effort to understand the correspondence between these learned features and actual representations in the human visual system. Thus, while these models describe the computations carried out in early vision, many of them cannot be directly related to visual search behavior. 
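As a rough illustration of the saliency-map idea discussed above, the sketch below computes a toy intensity-only map by summing center-surround contrasts over two spatial scales. This is my own simplification, not the Itti and Koch implementation: box blurs stand in for their Gaussian pyramids, and the color and orientation channels are omitted.

```python
def box_blur(img, k):
    # Crude box blur: mean over a (2k+1) x (2k+1) window, zero-padded at edges
    # (a stand-in for the Gaussian smoothing used in real saliency models).
    n = len(img)
    area = (2 * k + 1) ** 2
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            total = 0.0
            for di in range(-k, k + 1):
                for dj in range(-k, k + 1):
                    if 0 <= i + di < n and 0 <= j + dj < n:
                        total += img[i + di][j + dj]
            out[i][j] = total / area
    return out

def toy_saliency(img):
    # Sum center-surround contrasts over two spatial scales, loosely in the
    # spirit of the intensity channel of an Itti-Koch-style saliency map.
    n = len(img)
    sal = [[0.0] * n for _ in range(n)]
    for center, surround in [(1, 4), (2, 8)]:
        c, s = box_blur(img, center), box_blur(img, surround)
        for i in range(n):
            for j in range(n):
                sal[i][j] += abs(c[i][j] - s[i][j])
    peak = max(max(row) for row in sal)
    return [[v / peak for v in row] for row in sal]

# A bright square on a dark field should dominate the toy saliency map.
img = [[1.0 if 14 <= i < 18 and 14 <= j < 18 else 0.0 for j in range(32)]
       for i in range(32)]
sal = toy_saliency(img)
```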
Important exceptions should be noted, such as Zelinsky’s Target Acquisition Model (2008), which was developed to predict scan-path behavior in target-present visual search based on a mechanistic model, and Najemnik & Geisler (2008), whose ideal-observer model reveals important deficiencies in common theories of visual attention. Both provide more specific predictions in terms of sequences of fixations (and thus saccades) rather than simply fixation distributions. Still, none of these models include estimates of visual processing times in humans, which complicates detailed comparisons to human performance. Lastly, Rosenholtz, Huang, Raj, Balas, and Ilie (2012) proposed a texture tiling theory of crowding in peripheral vision that considers search efficiency to be a function of summary visual statistics over peripheral pooling regions that aggregate low-level visual information. However, here too, no differentiation was made between visual processing times and focused-attention decision times, and an explicit mechanism is lacking to predict actual processing time given those summary statistics. Consequently, important aspects of the influence of parallel processing in visual search remain largely uncharted.

There are several reasons why understanding this processing stage is important. A priori, selective attention evolved to address the need to optimally bridge the gap in processing capacity between early parallel visual processing and higher-level processing; therefore, understanding what information the parallel stage can process naturally provides boundaries on what the attentive, limited-capacity stage needs to do and/or compute. More importantly, the often-implied assumption that parallel, unlimited-capacity processing results in constant processing times simply does not hold: Townsend and Ashby (1983), for instance, provided a precise mathematical formulation of a variety of such processing models, many of which predict non-flat RT by set size functions.
The counter-intuitiveness of these results can perhaps be dispelled if one considers that unlimited capacity (i.e., the term referring to the fact that information is processed simultaneously at various spatial locations/channels, independently of the number of locations/channels) should not be and is not synonymous with infinite capacity (i.e., that there are no limitations to the processing capacity at any one location). We propose, then, that developing a better understanding of early parallel processing ought to be very informative for attention research. Empirically, there are various experimental results indicating that the visual system can rapidly access substantial amounts of information without focused attention, such as scene gist (Potter & Levy, 1969; Potter, 1976; Schyns & Oliva, 1994; Oliva, 2005), statistical properties in a scene (e.g., Parkes, Lund, Angelucci, Solomon & Morgan, 2001; Chong & Treisman, 2005a, 2005b; Haberman & Whitney, 2009), and some basic categorical information about objects (Li, VanRullen, Koch, & Perona, 2002; Li, Iyer, Koch & Perona, 2007). Such processing power must be based on this parallel processing stage, about which relatively little has been learned. Additionally, current theories often fail to account for search performance variability in real-world scenes (e.g., Itti & Koch, 2000; Wolfe, Alvarez, Rosenholtz, Kuzmova & Sherman, 2011), which could be at least partly due to neglecting the processing variability arising from the parallel processing stage.

### Systematic variability in efficient search

Recent work in our lab demonstrated an important reaction time signature of the parallel processing stage in fixed-target, efficient visual search (Buetti, Cronin, Madison, Wang & Lleras, 2016). Our results showed that in addition to a linear increase in reaction time caused by distractor items highly similar to the target, less similar items can produce a logarithmic increase in reaction time as set size increases.
This logarithmic function can be easily overlooked if one does not sample the set size conditions appropriately and simply fits a linear regression to the data. Figure 1 illustrates key aspects of our results. These two different signatures in reaction time led us to propose a distinction between two types of visual distractors: candidates and lures. Candidates are items that require focused spatial attention to be distinguished from the target because they share too many visual characteristics with the target (such as color, curvature, line intersection, orientation). As a result, given the known representational limitations of peripheral vision, human observers cannot discriminate candidates from the target in parallel in the peripheral field of view. In contrast, lures are items that are sufficiently different from the target along some set of visual features that they do not require close scrutiny. That is, the resolution of peripheral viewing is sufficient for determining that lure items are not the target. Take, for example, the case of looking for a watering can in your garden. Close scrutiny is likely not required to decide that the fence, trees, flowers, grass, and large lawn furniture are not a watering can. You can, therefore, discard all such objects as unlikely targets in parallel; in this particular example, we would refer to them as lures. Other objects of similar size, color, and material (perhaps some children’s toys) might be confusable with the watering can in peripheral vision. We would refer to those objects as candidates, and those candidates would require focused attention to be differentiated from the target.

Figure 1. Key findings demonstrating logarithmic RT by set size functions from Buetti et al. (2016). Panel A: Data from Experiment 3A of Buetti et al. (2016). The task was to find a ‘T’ target among ‘L’ candidates and thick orange cross lures.
The data are best described as a logarithmic function of total set size when the number of candidates is held constant. Notice that the two curves for the two different candidate set sizes are essentially parallel, suggesting that candidates introduce a linear increase in reaction time. Panel B: Data from Experiment 1A of Buetti et al. (2016). Reaction times to find a red triangle target among different types of lures are best fit by logarithmic functions, whose steepness or ‘logarithmic slopes’ are modulated by the similarity between lure and target.

Returning to lures: lures are sufficiently different from the target that they can be processed in parallel, across the visual scene, and, with a high degree of success, ruled out as non-targets. When candidates and lures are both present in a scene, one can dissociate the linear and logarithmic contributions that each brings to overall RT (see Figure 1A). Furthermore, we also demonstrated that different types of lures produce logarithmic RT by set size functions of different steepness, depending on their visual similarity to the target, such that lures that are more similar to the target produce steeper logarithmic curves (Figure 1B). Notably, if linear regressions were performed on truncated sections of these RT × set size functions, most of these data would yield very small linear slopes (in all cases, below the traditional 10 ms/item “benchmark” for efficient search). Given these results, we proposed that lure items are processed in the first, parallel stage of vision to the degree that there is sufficient evidence to reject them as possible targets. Naturally, candidates also go through this parallel stage, but the resolution limitation at this stage of processing means it cannot differentiate them from the target. Locations where information is not differentiated in that manner are passed on for analysis by the second stage of focused spatial attention.
Further, the relationship between lure-target similarity and the slope of the logarithmic function indicates that lure-target similarity determines the efficiency of processing for each individual lure item. We developed the following set of hypotheses to construct a theoretical model of stage-one visual processing that allows us to understand variability in stage-one processing times:

• (1) Consistent with traditional assumptions of early visual processing (e.g., Treisman & Gelade, 1980; Wolfe, 1994), we proposed that stage-one processing has a parallel architecture and unlimited capacity. Hence, all items in the display are simultaneously processed at a rate that does not depend on set size.

• (2) During stage-one processing, the visual system is attempting to make a binary decision at each location where there is an item. The question is: is this item sufficiently different from the target? If so, the item is unlikely to be the target and does not require further processing. If it is sufficiently similar to the target, given the resolution limitations of peripheral vision, the item will require further processing and its location will be passed on to stage-two processing. An eye movement or a deployment of focused attention will be required to resolve and inspect the item to determine whether or not it is the target.

• (3) The amount of evidence required to reach a decision about an item (the “decision threshold”) is proportional to its similarity to the target. This follows from the idea that the more visually similar an item is to the target, the more information is needed to determine that the item is indeed not the target and will not require further inspection. Given the resolution limitation of peripheral vision, there is a maximum decision threshold. All locations containing items that reach that level (i.e., items too similar to the target, and the target itself) will be passed on to the second stage of processing.
In order to make explicit predictions with this theory, we specified the following assumption to model individual item processing times:

• (4) Processing of individual items is modeled by noisy accumulators. The rate of information accumulation at each instant is drawn from a Gaussian distribution with a positive mean value. Processing is complete when accumulated evidence reaches a decision threshold. As proposed above, the decision threshold is proportional to the item’s similarity to the target. This process is thus mathematically equivalent to a Brownian motion with a constant drift rate towards a given threshold. The completion time t of this process follows the Inverse Gaussian distribution (Chhikara, 1988):

(1)

$f(t \mid A, k, \sigma) = \frac{A}{\sqrt{2\pi \sigma^{2} t^{3}}}\, e^{-\frac{(A - kt)^{2}}{2\sigma^{2} t}}$

where A is the accumulator’s threshold, k is the constant drift rate (or mean accumulation speed), and σ is the standard deviation of the information accumulated at each instant.

These assumptions enabled us to numerically simulate different implementations of a parallel, unlimited-capacity processing system and derive the expected time cost as a function of the number of items to be processed, modulated by the similarity of items to the target. Specifically, following the pioneering work by Townsend and Ashby (1983), we implemented different termination rules (self-terminating vs. exhaustive) in systems with or without resource reallocation in the case of efficient search (see Figure 3 and Appendix A of Buetti et al., 2016, for detailed methods and results of these simulations). Our simulation results indicated that only a system with an exhaustive termination rule (i.e., the stage is complete once all items are fully processed) and no reallocation of resources produces logarithmic curves.
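As a concrete illustration (a minimal sketch, not the authors' code): the Inverse Gaussian in formula (1) is exactly the Wald distribution with mean A/k and scale A²/σ², which NumPy can sample directly. The sketch below checks that a single accumulator's mean completion time is A/k, and that under the exhaustive rule the mean completion time of the slowest of N parallel accumulators grows roughly logarithmically with N. The specific parameter values (A = 15, k = 4, σ = 2) are illustrative choices, not values asserted at this point in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def completion_times(A, k, sigma, size):
    """First-passage times of a Brownian accumulator with drift k and
    noise sigma toward threshold A.  These follow the Inverse Gaussian
    of formula (1), i.e. a Wald distribution with mean A/k and
    scale A**2 / sigma**2."""
    return rng.wald(A / k, A**2 / sigma**2, size)

# A single accumulator's mean completion time is A/k:
t = completion_times(A=15, k=4, sigma=2, size=200_000)
print(round(t.mean(), 2))   # close to 15/4 = 3.75

# Exhaustive, unlimited-capacity processing: a display of N items
# finishes with its slowest accumulator, and the mean of that maximum
# grows roughly logarithmically in N:
for N in (1, 2, 4, 8, 16, 32):
    t_max = completion_times(15, 4, 2, (20_000, N)).max(axis=1).mean()
    print(N, round(t_max, 2))
```

Sampling the closed-form distribution rather than stepping a discretized random walk keeps the simulation fast without changing the modeled process.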
Further, we demonstrated that in such cases, the steepness of these logarithmic curves is modulated by the similarity of lure items to the target, just as observed in our experiments (see Figure 1B). In other words, we demonstrated a one-to-one correspondence between decision thresholds in our accumulator models and the slopes of the logarithmic completion times, such that smaller decision thresholds produce flatter logarithmic slopes and larger decision thresholds produce steeper logarithmic slopes. In sum, we found evidence (based on empirical data and a set of reasonable assumptions) that stage one in visual search functions as a parallel, unlimited-capacity, exhaustive processing system. When there are no candidate items in the display (other than the target), this model can account for all the systematic reaction time variation caused by changes in the number of lure items. Our simulations combined with our behavioral results suggest that the coefficient of the logarithmic slope observed in behavioral experiments can be interpreted as an index of lure-target similarity, as it reflects the amount of evidence required to reject the lure as a possible target (Buetti et al., 2016, Appendix A).

### Predicting performance in lure-heterogeneous displays

Given our current theory of stage-one processing in visual search, one intriguing application is to the understanding of performance in search tasks with multiple types of distractors presented simultaneously. Many laboratory experiments on visual search use highly homogeneous displays, i.e., the distractor items are either completely identical or composed of groups that differ from each other in only one feature dimension. In the real world, however, an arbitrary scene often consists of mostly non-repeating objects.
When a specific target is defined, it is also usually the case that most non-target objects are highly dissimilar to the target, so that very few of them need to be actually examined (Neider & Zelinsky, 2008; Wolfe et al., 2011). Thus, it seems that many visual search-like tasks performed in the real world are best conceptualized as search for a target amongst a heterogeneous set of lure items. Notice that many conclusions drawn from homogeneous search tasks cannot be easily extended to a heterogeneous search scenario. Duncan and Humphreys (1989) already pointed out that distractor-distractor similarity (or heterogeneity in the distractor set) has an effect independent of target-distractor similarity. Guided Search theory (Wolfe, 1994) proposed that top-down attention could ‘guide’ parallel processing by prioritizing items with specific feature values of the target. Yet in the real world, objects are defined as conjunctions of many different feature dimensions, so that groups of objects can share a few features while each object remains sufficiently dissimilar to every other one along several feature dimensions. Nordfang and Wolfe (2014) found that in the case of high feature dimensionality, the effect of heterogeneity in visual search could not be explained by a linear summation of the ‘guidance’ afforded by each feature dimension. Therefore, the difference or relationship between homogeneous and heterogeneous search is still relatively unclear.

One prominent aspect of our current theory is that it emphasizes the role of visual similarity in the parallel stage of processing, and it provides a more specific formulation of the effect of target-distractor similarity than Duncan and Humphreys (1989) and other previous theories. Further, the concept of visual similarity is abstract enough to be applied to artificial and naturalistic stimuli alike.
Hence, we expect our previous results to extend to tasks using natural images as search items. Specifically, efficient search should always be modulated by lure-target similarity and should produce logarithmic RT by set size functions when observers are looking for a specific target. For example, searches for a teddy bear target among an array of toy pandas and among an array of model cars should both produce logarithmic RT by set size functions, because both toy pandas and model cars look sufficiently dissimilar to the target. Moreover, the function for toy panda lures should be steeper than the log curve produced by a search for a teddy bear among model cars, as long as the toy panda is visually more similar to the teddy bear than the model car is. More importantly, the degree of similarity between one distractor item and the target item should not depend on what other objects are present in the scene, or on whether the distractor set is homogeneous or heterogeneous. Moreover, as mentioned above, our theory and results suggest that the ‘slope’ of the logarithmic function measured in homogeneous search can be a valid behavioral index of lure-target similarity. Hence, in principle, we should be able to predict search times in lure-heterogeneous displays based on participants’ performance on lure-homogeneous displays. This follows because, as we just mentioned, our model proposes that there is a one-to-one correspondence between accumulation thresholds and the log slope coefficients of homogeneous search. If this is correct, then we should be able to derive accumulation thresholds for each lure type from the observed log slopes of lure-homogeneous search data. Then, we should be able to use these thresholds to predict search RT in novel, heterogeneous scenes.1 This illustrates the generalizability and specificity of our theory: it makes specific RT predictions for performance in novel, untested experimental scenarios.
We can then compare the RT predictions to observed experimental data to test the accuracy of the model. Further, systematic deviations from the model’s predictions can be used to infer undiscovered properties of human parallel processing, as we demonstrate below. There are two obstacles that need to be overcome before we can take on this approach. The first issue is that an analytical solution for stage-one processing time based on our current model is not readily available, which means that, given observed log slope values, we cannot directly compute the corresponding accumulator thresholds. This is because even though the individual accumulator’s completion time is well understood (formula 1), in the case of heterogeneous displays (where individual completion times are sampled from multiple groups of different Inverse Gaussian distributions), the maximum of all items’ completion times (since our model assumes an exhaustive termination rule) requires an integral that seems to be analytically unsolvable.2 To circumvent this issue, in Buetti et al. (2016) we used a computational simulation approach to find numerical mappings between thresholds and log slopes. This is the same approach that we will use here to make numerical predictions of heterogeneous search performance. Specifically, we developed several equations predicting heterogeneous search time based on different theoretical assumptions, and compared their predictions to simulated heterogeneous search results. The best-performing equation was taken as the prediction of our theory, in lieu of the exact analytical solution. A second issue lies in the fact that our model assumed that individual items are processed independently of each other, and this assumption was not directly backed by evidence. In Buetti et al. (2016), we rejected one type of processing interaction: the resource-reallocating model.
We were able to do so because this family of models produces a qualitatively different RT by set size function than non-reallocating models (i.e., a monotonically decreasing function). However, other types of processing interactions are possible. In particular, models where lure-to-lure interaction effects are additive or multiplicative and constant over the time course of stage-one processing could not be ruled out. This is because in Buetti et al. (2016) we made a qualitative comparison between the various simulation results and the shape of the observed RT by set size functions in human participants. Thus, if homogeneity or heterogeneity in the search scene were to introduce a constant (additive or multiplicative) change in the processing of items, the overall shape of the RT functions would still be logarithmic, and our model’s predictions would be inaccurate by either an additive or a multiplicative factor. For instance, one might expect that display heterogeneity slows down overall processing, or the reverse, that display homogeneity facilitates processing via a mechanism like the ‘spreading suppression’ originally suggested by Duncan and Humphreys (1989). This issue is therefore an empirical question, and we resolved it by designing empirical tests of our theory’s predictions. To anticipate, by examining the deviations of observed heterogeneous search performance from the predictions based on the independence assumption, we gained insight into what type of interaction might be taking place in homogeneous displays to facilitate homogeneous search performance.

### Strategy adopted in our computational approach and predictions

First, we began by simulating homogeneous search completion times for three types of lures, using a different accumulation threshold for each lure type.
We then estimated the log slope coefficients for each of these lure types by finding the best-fitting logarithmic slope coefficient for the function relating the number of lures to completion times. We refer to these slopes as D values. Second, we ran simulations of completion times for heterogeneous search scenes. Each scene was composed of a varying number of each of the three types of lures. Processing of every lure was modeled by the same type of accumulators, with the same accumulation thresholds as used in homogeneous displays. In other words, here we assumed that lure-target similarity is context-independent: the degree of similarity between one type of lure and the target should not depend on what other objects are present in the scene, or on whether the distractor set is homogeneous or heterogeneous. Third, based on different assumptions about how the processing of heterogeneous search scenes might unfold, we developed four different theoretical models of stage-one processing. For each of these models, we derived a hypothetical equation that approximates the completion time as a function of the number of lures of each type present on the display and each lure type’s logarithmic slope coefficient (i.e., the D values extracted from lure-homogeneous simulations). The models and their corresponding equations were pre-registered on the Open Science Framework website, in the context of the pre-registration for Experiment 2 (osf.io/2wa8f). Fourth, we compared the simulated completion time for each display with the completion time predicted by each of the four processing models. This comparison allowed us to select the best-performing equation as the optimal formulation of the relation between homogeneous and heterogeneous search performance. As a reminder, this series of steps was necessary because we do not know of an exact analytical solution for completion times in heterogeneous displays using our accumulators (see Note 2).
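The first two steps of this strategy can be sketched in a few lines (an illustration under stated assumptions, not the authors' code): simulate homogeneous displays of exhaustive, parallel Inverse Gaussian accumulators, regress mean completion time on ln(N) to recover a D value per lure type, and then reuse the same thresholds in a mixed display. The parameter values below (target threshold 20; lure thresholds 15, 17, 19; k = 4; σ = 2) follow those reported for simulation no. 1 in the Methods, and the 7/3/1 lure mix is a hypothetical example condition.

```python
import numpy as np

rng = np.random.default_rng(2)

K, SIGMA = 4.0, 2.0
A_TARGET, A_LURES = 20.0, (15.0, 17.0, 19.0)

def exhaustive_time(thresholds, reps=2000):
    """Mean completion time of one display: each item is an independent
    Inverse Gaussian (Wald) accumulator, and under the exhaustive rule
    the display finishes when the slowest item finishes."""
    A = np.asarray(thresholds)
    t = rng.wald(A / K, A**2 / SIGMA**2, size=(reps, A.size))
    return t.max(axis=1).mean()

def fit_D(lure_threshold, set_sizes=(1, 2, 4, 8, 16, 32)):
    """Steps 1-2: simulate homogeneous search (one target, N-1 lures of a
    single type) and regress completion time on ln(N) for the log slope D."""
    times = [exhaustive_time([A_TARGET] + [lure_threshold] * (n - 1))
             for n in set_sizes]
    D, a = np.polyfit(np.log(set_sizes), times, deg=1)
    return D, a

Ds = [fit_D(A)[0] for A in A_LURES]
print("log slopes D:", np.round(Ds, 3))   # steeper D for more target-similar lures

# Step 2 (heterogeneous scenes): the *same* thresholds, mixed in one
# display (context independence) -- here 1 target + 7/3/1 lures per type.
het = [A_TARGET] + [15.0] * 7 + [17.0] * 3 + [19.0] * 1
print("heterogeneous mean time:", round(exhaustive_time(het), 2))
```

Each heterogeneous condition's simulated mean time can then be compared with the value each candidate equation predicts from the fitted D values.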
The detailed methods and results of this simulation procedure are presented below in the Predictive Simulation section. Finally, we performed an empirical test of our computational model by using the best-performing equation to predict behavioral data. That is, we used performance observed in a homogeneous search experiment (Experiment 1) to predict performance in a separate experiment with heterogeneous displays (Experiment 2). This amounts to a rigorous empirical test of our computational model and of its underlying assumptions. Using Experiment 1 data, we can estimate best-fitting log slopes for each type of lure. We can then use those estimated D values in conjunction with the best-performing equation to predict what RTs ought to be in the various heterogeneous conditions tested in Experiment 2. We can then compare predicted RTs with observed RTs to test whether the equation favored by our theory-based simulation also outperforms all other alternative equations in behavioral data. This comparison allowed us to assess the predictive power of our theory. This predicted-to-observed RT comparison will also be used to investigate whether individual items are processed independently of one another or whether there are inter-item interactions that produce systematic deviations from the model. Because our simulation was conducted under the assumption of between-item independence, the predictions of this equation naturally carry that assumption along. Hence, any systematic deviation of predicted stage-one processing times from observed heterogeneous search data can be interpreted as an effect of homogeneity (or heterogeneity, depending on the viewpoint) on processing times. To estimate the deviations from our model, we fit predicted stage-one processing times to observed data via a linear regression.
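This diagnostic regression is simple to sketch (illustration only; the numbers below are purely synthetic and are not the experimental data):

```python
import numpy as np

def homogeneity_diagnostic(predicted, observed):
    """Regress observed stage-one time costs on model-predicted ones.
    Context-independent processing implies slope = 1 and intercept = 0
    (the y = x line); a slope != 1 signals a multiplicative effect of
    display composition, an intercept != 0 an additive effect."""
    slope, intercept = np.polyfit(predicted, observed, deg=1)
    return slope, intercept

# Synthetic example: observed costs 10% smaller than predicted,
# plus a constant 12 ms shift.
pred = np.array([40.0, 80.0, 120.0, 160.0, 200.0])
obs = 0.9 * pred + 12.0
slope, intercept = homogeneity_diagnostic(pred, obs)
print(round(slope, 3), round(intercept, 1))   # 0.9 (multiplicative) and 12.0 (additive)
```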
On the one hand, if the model allowed for a perfect prediction, then observed and predicted stage-one time costs should line up precisely along the y = x line, with y being the observed stage-one time cost in every heterogeneous condition tested, and x being the predicted stage-one time cost for each condition, based on the model and the parameters obtained from homogeneous displays. We would then conclude that items are always processed independently of context. On the other hand, if there are systematic deviations from the predicted stage-one time costs in the observed data, these indicate that lures are not processed in a context-independent fashion. Two types of systematic deviations can therefore be obtained: a multiplicative deviation and an additive deviation. If the estimated slope coefficient between observed and predicted stage-one time costs is different from 1, this would indicate a multiplicative effect of homogeneity on single-item processing. If the intercept coefficient of the observed-by-predicted time cost function is different from 0, this would indicate an additive effect of homogeneity on stage-one processing times (the observed time costs would then be shifted below the y = x line).

In summary, the current study consists of a predictive simulation and two experiments. The simulation provides a best-performing equation for predicting the completion times of stage-one processing in heterogeneous visual search, using parameters obtained from homogeneous visual search displays. Next, we used a set of real-world object images to construct homogeneous search displays in Experiment 1 and heterogeneous search displays in Experiment 2. Experiment 1 served both as an extension of our previous ‘feature search’ results (as in Experiment 1 of Buetti et al., 2016) to real-world objects and as the data source for estimating the log slope (D) coefficients used to predict heterogeneous search data in Experiment 2.
In Experiment 2, we collected behavioral data for visual search with heterogeneous displays, each containing a mix of the objects used in Experiment 1, using a different group of subjects. By comparing different equations’ predictions of RT to observed RT, we were able to (1) test whether the best-performing equation from our theory-based simulation also works best in reality, and (2) examine whether there is a multiplicative or additive effect of homogeneity on stage-one processing and, if so, estimate the magnitude of that effect in our data.

## Predictive Simulation

### Methods

#### Approximating equations for heterogeneous search

We developed the following set of equations in the hope that some of them may be a good approximation of the exact analytical solution for the stage-one processing time cost in heterogeneous lure search. Each equation describes the time cost of stage-one processing, T, as a function of the D coefficients and the numbers of each type of lure item (N), i.e., T = f({Di}, {Ni}). The D coefficients here are meant to be a proxy for the accumulation threshold values, and thus are assumed to be independent of context (homogeneous or heterogeneous). Therefore, given the D coefficients estimated from homogeneous search, the equations provide different predictions of heterogeneous search time, based on different underlying hypotheses about how search unfolds in a heterogeneous scene. For each equation, its form in the case of 3 types of lures will be presented below (rather than the general form with an arbitrary number of lure types). D coefficients will have the ordering D3 > D2 > D1 > 0, i.e., we denote lure type 3 as having the highest similarity to the target and lure type 1 the lowest. Note that these equations do not include time costs associated with other processing stages, such as encoding, response selection, and execution, which we assume to be constant in efficient search tasks for a given target.
(Equation 1:) This equation was a simple extension of the concepts in Buetti et al. (2016). We assumed: (a) that all lures are processed in parallel; (b) that evidence stops accumulating at a location once the decision threshold for that stimulus has been reached; and (c) that evidence continues to accumulate at locations where decision thresholds have not been reached. At the aggregate level, this means that lures with lower decision thresholds will be rejected sooner than lures with higher decision thresholds. This reduces the number of “active” accumulators over time. Imagine a display with blue, red, and orange lures, and assume that blue is the least similar lure to the target, followed by red, then orange. In this model, blue lures would be rejected first, then red lures, and finally orange lures (on average). As an example of this model, consider the case of the equation above where there are three different types of lures in the scene (i.e., three different decision thresholds, with D3 > D2 > D1 > 0), with Ni being the number of lures of type i (i = 1, 2, 3). The first term represents the time cost for all 3 types of lure items to arrive at the evidence threshold for lure type 1 (D1). Here, lures of type 1 are rejected. Then, evidence for lures of types 2 and 3 continues to accumulate. However, some evidence about these lures has already been accumulated (dictated by D1). Thus, the second term represents the additional time cost for lure types 2 and 3 to arrive at the decision threshold for lures of type 2 (D2) (hence the term D2 − D1), and so on.

(Equation 2:) Equation 2 assumes that each group of lures is considered and rejected sequentially. That is, different types of lures are processed in a serial and exhaustive fashion, while within each type of lure, individual items are processed in parallel. This model would mean that first all blue lures are processed and discarded in parallel, then the red ones, and last the orange ones.
The big difference between Models 1 and 2 is that in Model 2, accumulation for the red lures starts only once the blue ones have been discarded, whereas in Model 1, accumulation for all types of lures starts simultaneously and rejected lures “fall off” while the other ones continue to accumulate evidence.

(Equation 3:) Equation 3 represents a model that has a single decision threshold, associated with the single D value in the equation. The model predicts that while all items are being processed in parallel and exhaustively, the amount of information required to complete processing is determined by the lure with the highest similarity to the target. This can be understood as there being a single decision threshold for the entire display: items below it will be discarded at the same moment, while items above the threshold will require focal inspection (i.e., are likely targets). This kind of idea has been proposed in the literature in various papers (e.g., Guided Search, Wolfe, 1994; TAM, Zelinsky, 2008). So, in the example above, when all three types of items are present in the display, the decision threshold used would be the one for orange lures, whereas when only blue and red lures are present, the decision threshold for red would be used.

(Equation 4:) Equation 4 serves as an alternative to Equation 3. Here the log slope is estimated by the mean of the 3 types of lures (instead of the max), while all items are still processed exhaustively in parallel. We note here that the above 4 equations include variations in different aspects of processing across lure types. Equation 1 is the strongest extension of our theory, since it assumes both parallel processing and independence across lure types. Equation 2 assumes independence but serial processing across lure types, whereas Equations 3 and 4 assume parallel processing but with interaction between different lure types.
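The four equations themselves are referenced but not typeset in this text. The sketch below is one reading of their verbal descriptions (our interpretation, not the authors' exact formulas), with hypothetical D values and lure counts chosen purely for illustration; the logarithmic form follows the homogeneous-search regression of RT on ln(N).

```python
import numpy as np

# D = (D1, D2, D3) are log slopes with D3 > D2 > D1 > 0;
# N = (N1, N2, N3) are the lure counts per type.

def eq1(D, N):
    """Parallel and independent: low-threshold lures finish first and drop
    out, the remaining types continue accumulating (a cascade of log terms)."""
    (D1, D2, D3), (N1, N2, N3) = D, N
    return (D1 * np.log(N1 + N2 + N3)
            + (D2 - D1) * np.log(N2 + N3)
            + (D3 - D2) * np.log(N3))

def eq2(D, N):
    """Serial across lure types, parallel within a type."""
    return sum(d * np.log(n) for d, n in zip(D, N))

def eq3(D, N):
    """One display-wide threshold, set by the most target-similar lure."""
    return max(D) * np.log(sum(N))

def eq4(D, N):
    """As eq3, but using the mean rather than the maximum log slope."""
    return float(np.mean(D)) * np.log(sum(N))

# Hypothetical D values and lure counts, for illustration only:
D, N = (10.0, 20.0, 30.0), (7, 3, 1)
for f in (eq1, eq2, eq3, eq4):
    print(f.__name__, round(f(D, N), 1))
```

Note how the models diverge even for this one display: eq3 always predicts at least as large a cost as eq4, since the maximum D is at least the mean D.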
#### Simulation and Analyses

The goal of this simulation was to find out which of the 4 equations above best accounts for simulated time costs of stage-one processing in heterogeneous scenes. The critical parameters are the threshold values (representing different lure-target similarity levels), the drift rate k (the rate of information accumulation, which is sampled from the same Gaussian distribution regardless of scene context) and the noise range σ. We used two sets of parameters, representing two different sets of stimuli, to simulate stage-one completion times under the same model architecture. Choosing two different sets of parameter estimates allows us to be confident that our simulations and our equations are not overly dependent on any specific parameter and that, in fact, they generalize well across the parameter space. In both runs of simulations, we simulated displays containing at most three types of lure items, to ensure a sufficient degree of heterogeneity without requiring too many different display conditions. Simulation no. 1 had a target item whose threshold was 20, and three types of lure items with thresholds of 15, 17, and 19. The drift rate k was fixed at 4 and the noise range σ was also a constant 2. Simulation no. 2 had a target threshold of 62, three lure thresholds at 48, 53, and 58, a drift rate of 9, and a noise range of 4. In each simulation run, threshold values, drift rate, and noise range parameters were held constant. Given a specific set of parameters, the simulation procedure and algorithm can be described as follows:

1. For each type of lure item, we simulated homogeneous search time as a function of set size. At set size N, there was 1 target item and N-1 lure items. The target item’s processing time was found by randomly sampling from an Inverse Gaussian distribution defined by the target threshold At, the drift rate k and the noise range σ.
Each of the N-1 lure items was similarly simulated by sampling from another Inverse Gaussian distribution with a lure threshold value Al and the same k and σ. The overall processing time was simply the maximum of all individual items’ processing times (i.e., the exhaustive processing rule). Because of the randomness in the sampling procedure, we took the mean processing time cost of 2000 repetitions at each set size as the final output.

2. For each type of lure, we computed a regression of RT = D̂ ln(N) + â based on the simulated results from step (1). The estimated coefficients D̂ and â were used for predicting heterogeneous search time cost.

3. We simulated heterogeneous search time with different combinations of the 3 types of lures. For each simulated display condition, each type of lure could appear 1, 3, 7, or 15 times with one or two other lure types, which yielded a total of 111 unique combinations or conditions (see Appendix A for a complete list of these conditions). Processing time costs were then simulated in the same way as in step (1), with 2000 repetitions per condition.

4. For each of the 4 approximating equations, we computed the predicted completion times for each display condition simulated in (3), using the estimated D̂ and â coefficients from (2). We then compared predicted completion times against simulated completion times by computing a linear regression for each equation, to determine which equation best fits the simulated data. We used several diagnostics of goodness of fit, including the R square, log likelihood, and the slope and intercept coefficients of the regression models.

### Results

We plotted simulated completion times for heterogeneous scenes against predicted completion times as scatterplots, for each equation and for both simulation runs, in Figure 2. The y = x line is also plotted for reference.
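Steps (1) and (2) of the simulation procedure above can be sketched as follows, using Simulation no. 1's parameters. One assumption to flag: the mapping from threshold A, drift k and noise σ onto the Inverse Gaussian (Wald) distribution uses the standard first-passage-time result for a single-boundary diffusion, mean A/k and shape (A/σ)², since the excerpt does not spell out this parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

def passage_times(threshold, k=4.0, sigma=2.0, size=1):
    """Sample first-passage times to `threshold` from a Wald
    (Inverse Gaussian) distribution: mean A/k, shape (A/sigma)^2
    (assumed mapping from accumulator parameters)."""
    return rng.wald(threshold / k, (threshold / sigma) ** 2, size=size)

def mean_homogeneous_rt(lure_threshold, set_size,
                        target_threshold=20.0, reps=2000):
    """Step (1): exhaustive rule -- a display's completion time is the
    max over the target and set_size - 1 identical lures; the output
    is the mean over `reps` simulated displays."""
    t = passage_times(target_threshold, size=reps)
    if set_size > 1:
        lures = passage_times(lure_threshold, size=(reps, set_size - 1))
        t = np.maximum(t, lures.max(axis=1))
    return t.mean()

# Step (2): regress mean completion time on ln(set size) to get the
# log-slope estimate D-hat (and intercept a-hat) for one lure type.
set_sizes = np.array([2, 5, 10, 20, 32])

def log_slope(lure_threshold):
    rts = [mean_homogeneous_rt(lure_threshold, n) for n in set_sizes]
    D_hat, a_hat = np.polyfit(np.log(set_sizes), rts, 1)
    return D_hat

D_low, D_high = log_slope(15.0), log_slope(19.0)
```

As in the paper's simulations, a lure threshold closer to the target's (19 vs. 15) should yield a steeper log slope, because more target-similar lures take longer to reject.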
Table 1 summarizes regression model characteristics for each equation in both simulation runs. These characteristics describe how well predicted processing times match simulated processing times. In both simulations, Equation 1 had the highest R-squares, the slope coefficient closest to 1, and the estimated intercept closest to 0. Predictions of Equation 1 also fell closest to the y = x line in Figure 2 for both simulations.

Figure 2 Predicted processing time according to the 4 approximating equations for heterogeneous processing times, plotted as a function of simulated processing times. Panels A and B present results from two different simulation runs using different sets of parameters (see text for more details).

Figure 3 Example search displays for Experiment 1 (top row) and Experiment 2 (bottom row).

Table 1 Model characteristics of linear regressions of simulated processing times as a function of predicted processing times.

Simulation 1:

| | R² | Log likelihood | Slope (Standard Error) | Intercept (Standard Error) |
|---|---|---|---|---|
| Eq 1 | 0.9615 | 114.031 | 1.005 (0.019) | –0.109 (0.127) |
| Eq 2 | 0.8045 | 23.816 | 0.463 (0.021) | 3.075 (0.163) |
| Eq 3 | 0.7934 | 20.758 | 0.877 (0.043) | 0.570 (0.291) |
| Eq 4 | 0.8049 | 23.953 | 1.112 (0.052) | –0.652 (0.338) |

Simulation 2:

| | R² | Log likelihood | Slope (Standard Error) | Intercept (Standard Error) |
|---|---|---|---|---|
| Eq 1 | 0.9637 | 124.179 | 1.008 (0.018) | –0.152 (0.156) |
| Eq 2 | 0.8095 | 32.107 | 0.482 (0.022) | 3.871 (0.203) |
| Eq 3 | 0.7709 | 21.878 | 0.861 (0.045) | 0.905 (0.383) |
| Eq 4 | 0.7852 | 25.452 | 1.137 (0.057) | –1.058 (0.466) |

From these results, we can conclude that Equation 1 is our best-performing equation for predicting heterogeneous lure search based on performance metrics from homogeneous displays. In the next section, we consider empirical data from human participants in both lure-homogeneous (Experiment 1) and lure-heterogeneous (Experiment 2) search tasks.

## Experiment 1

Experiment 1 serves two purposes.
First, it allowed us to estimate three different lure-target similarity coefficients in homogeneous displays, to be used to predict performance in Experiment 2 (heterogeneous displays). In addition, it allowed us to extend the findings from Buetti et al. (2016) to real-world stimuli. Our previous results were based on two groups of simple stimuli with relatively few distinguishing features (group 1: find a red triangle or blue half circle among orange diamonds or yellow triangles or blue circles; group 2: find a red T among red Ls and thin orange crosses or thick red crosses or orange crosses or orange squares; see Figure 4 in Buetti et al., 2016 for an illustration of these stimuli).

Figure 4 Reaction times for Experiment 1 plotted as a function of set size and lure type. Curves indicate best-fitting logarithmic functions. The legend shows the analytical form of each of these functions as well as corresponding R-squares as a measure of fit. Error bars indicate one standard error of the mean. Images of search stimuli and the corresponding data symbols are presented on the right.

### Methods

Participants. Twenty-six participants were recruited through the Course Credit Subject Pool in the Psychology Department at the University of Illinois at Urbana-Champaign. Participants signed up for the experiment through the Department’s subject pool website. Prior to participating in any experiments, participants had to fill out a screening questionnaire that experimenters could use to filter out participants who did not meet recruitment criteria. In our case, we used this questionnaire to make sure that only participants without self-reported color-vision deficiencies could sign up for our experiment. Upon arrival at the lab, they were also screened for normal color vision using the Ishihara color test (10-plate edition, with the standard number tests). No participants were excluded due to abnormal color vision or low visual acuity.
All participants gave written informed consent before participating in the experiment. We excluded 3 participants whose overall accuracy was below 90%. For the 23 participants included in the analysis, ages ranged from 18 to 24 years; 14 were female and 21 were right-handed. This experiment was approved by the Institutional Review Board of the University of Illinois at Urbana-Champaign.

Apparatus and Stimuli. Stimuli were presented on a 20-inch CRT monitor at an 85 Hz refresh rate and 1024 × 768 resolution. Participants sat in a dimly lit room at a viewing distance of 75 cm. The experiment was programmed with Psychtoolbox 3.0 (Kleiner et al., 2007) in the MATLAB environment, and run on 64-bit Windows 7 PCs. Search objects were chosen from a collection of images studied by Alexander and Zelinsky (2011), which were originally sampled from Cockrill (2001) and the Hemera Photo-Objects collection. Alexander and Zelinsky (2011) obtained visual similarity ratings on these images using computational models and human subjects’ subjective ratings. Using their results, we selected groups of images that were consistently rated as having high or medium similarity to the teddy bear category to be used as distractor items. Specifically, we chose a red humanoid ‘carrot man’ and a white reindeer toy, both of which were consistently rated as highly similar to the teddy bear category, and a gray model car rated as having medium similarity. We also chose a specific teddy bear as the target item. These images of objects were presented at sizes of approximately 1.3 degrees of visual angle horizontally and 1.7 degrees vertically. All images had a small red dot overlaid on the left or right side, with a diameter of 0.2 degrees of visual angle. In each search display, there was always only one target and at most one type of lure item.
The items were randomly allocated on the screen based on an invisible 6-by-6 square grid that spanned 20 degrees of visual angle horizontally and vertically. Each item’s actual location was then randomly jittered within 1 degree horizontally and vertically. On average, the minimal distance between two items (i.e., the distance between two adjacent grid points) was 3.5 degrees. The grid was populated with equal (or approximately equal) numbers of items in each of the four quadrants of the screen. A white fixation cross was also presented at the center of the screen, spanning 0.6 degrees vertically and horizontally. All displays had a gray background with a color vector of [121, 121, 121] in RGB color space. Figure 3 presents examples of search displays for Experiments 1 and 2.

Procedure. At the beginning of the experimental session, instructions were both shown on the screen and delivered verbally to participants. Participants were told to look for the target teddy bear (whose image was shown on the screen) and respond to whether the red dot appeared on the left or right side of the bear. They were asked to press the left arrow key with their left index finger when the red dot was on the left, and the right arrow key with their right index finger when the dot was on the right. Speed and accuracy of response were equally prioritized. Trials started with a brief presentation of the central fixation cross, with a duration randomly selected from 350 to 550 ms. Then, the search scene was displayed for a maximum duration of 2.5 seconds. The display turned blank as soon as the participant pressed a response key. On error trials, a warning tone (a 1000 Hz sine wave lasting 250 ms) was played. The inter-trial interval was selected randomly between 1.4 and 1.6 seconds. Each experimental session started with a practice block of 32 trials.

Design. The two main independent variables, lure type and set size, were fully crossed within subjects.
There could be 1, 4, 9, 19, or 31 lures of the same identity on the display along with one target item (so that total set sizes were 2, 5, 10, 20, or 32); additionally, there was a target-only condition where only the target image appeared on the screen. Therefore, there were a total of 3 × 5 + 1 = 16 experimental conditions. The location of the red dot on the target image was pseudo-randomized to ensure that it appeared on the left or right equally often. Locations of red dots on lure images were randomized, appearing with 0.5 probability on the left or right. Each condition was repeated 50 times, so that there were 800 trials total in one experimental session. All conditions were randomly intermixed. There were short break periods every 20 trials; these lasted up to 20 seconds if participants did not resume the experiment sooner.

### Results

We compared regression models based on logarithmic and linear RT by set size relationships, using R square and log likelihood as measures of goodness of fit.3 In order to test the alternative hypothesis that the results could be better described by a bi-linear model assuming a transition point at set size 2, we also compared the log and linear models using data without the target-only condition. These results are summarized in Table 2. When the target-only condition was included, logarithmic models clearly outperformed linear models, indicating that a logarithmic model is more accurate and plausible in describing the data than a simple linear model; without the target-only data point, logarithmic models still consistently had higher R-squares and log likelihoods than the corresponding linear models. In Figure 4 we plotted reaction time against set size, separating the three groups of data by lure type, along with the best-fitting logarithmic curves for each lure type. The estimated logarithmic slope coefficients for each type of lure were: Dred carrotman = 66.278, Dwhite reindeer = 28.492, Dgrey car = 26.581.
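The log-vs-linear model comparison just described can be sketched as follows. The RT values below are synthetic (a logarithmic function plus noise, loosely mimicking the carrot-man condition), and the log likelihood is the maximum-likelihood Gaussian value; absolute likelihood values depend on constant conventions, but the comparison between the two models does not.

```python
import numpy as np

def fit_and_score(x, y):
    """OLS fit of y on x; returns (R^2, Gaussian log likelihood)."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    rss = float(resid @ resid)
    n = len(y)
    r2 = 1.0 - rss / float(((y - y.mean()) ** 2).sum())
    # ML Gaussian log likelihood, with sigma^2 estimated as RSS/n
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1.0)
    return r2, ll

rng = np.random.default_rng(0)
set_sizes = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 32.0])
# synthetic RTs growing logarithmically with set size (illustrative only)
rts = 530.0 + 66.0 * np.log(set_sizes) + rng.normal(0.0, 5.0, 6)

r2_log, ll_log = fit_and_score(np.log(set_sizes), rts)  # RT ~ ln(N)
r2_lin, ll_lin = fit_and_score(set_sizes, rts)          # RT ~ N
```

On data like these, the logarithmic model should win on both criteria, mirroring the pattern in Table 2.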
In sum, the results show that the RT by set size functions found in this experiment are best characterized by a series of logarithmic functions.

Table 2 Logarithmic vs. linear regression results of RT by set size functions in Experiment 1.

With the target-only condition:

| Lure type | Logarithmic R² | Logarithmic log likelihood | Linear R² | Linear log likelihood |
|---|---|---|---|---|
| Red carrot man | 0.933 | –27.178 | 0.713 | –31.558 |
| White reindeer | 0.930 | –22.251 | 0.619 | –27.351 |
| Grey model car | 0.852 | –24.354 | 0.595 | –27.380 |

Without the target-only condition:

| Lure type | Logarithmic R² | Logarithmic log likelihood | Linear R² | Linear log likelihood |
|---|---|---|---|---|
| Red carrot man | 0.965 | –18.718 | 0.924 | –20.650 |
| White reindeer | 0.951 | –15.365 | 0.711 | –19.827 |
| Grey model car | 0.938 | –14.636 | 0.919 | –15.300 |

It should be noted that the estimated ‘linear slopes’ were very small and would be categorized as ‘efficient’ or ‘pop-out’ search by traditional criteria in the literature. When the target-only condition was included, the estimated linear slope coefficients were 6.380 ms/item, 2.559 ms/item, and 2.446 ms/item for the carrot man, reindeer, and model car lures, respectively. Without the target-only condition, these changed to 4.531 ms/item, 1.727 ms/item, and 1.503 ms/item.

A within-subjects ANOVA on correct RTs, with lure type and set size as factors, was also conducted. Main effects were significant for both lure type, F(2, 44) = 265.37, p < 0.001, Cohen’s f = 3.47, and set size, F(5, 110) = 217.69, p < 0.001, f = 3.15. More importantly, the interaction between set size and lure type was significant, F(10, 220) = 54.13, p < 0.001, f = 1.57. These results indicate that different levels of lure-target similarity lead to different magnitudes of set size effects, i.e., lure-target similarity modulated search processing efficiency. To further understand this difference in search efficiency, we also computed individual subjects’ logarithmic slope estimates and used t-tests to compare the mean log slopes for different pairs of lures.
Consistent with the visual pattern in Figure 4, we found that the mean log slope for the red carrot man lure was significantly larger than the mean log slopes for both the white reindeer lure, t(22) = 15.85, p < 0.001, Cohen’s d = 3.31, and the grey model car, t(22) = 15.54, p < 0.001, d = 3.24, while there was no significant difference between the log slopes for the reindeer and the model car, t(22) = 1.42, p = 0.17.

### Discussion

Overall, our results provided evidence that in efficient searches with real-world stimuli, a logarithmic function captures the relationship between reaction time and set size better than a linear model does. Importantly, this conclusion was not contingent upon whether the target-only condition was included in the analysis. Additionally, the steepness of the logarithmic curves depended on the similarity between target and lures: the higher the similarity, the steeper (i.e., less efficient) the search function. This pattern of results extends our previous findings to real-world stimuli and corroborates the notion that visual similarity modulates early parallel visual processing, regardless of whether search objects differ from each other along a couple or along multiple feature dimensions (Buetti et al., 2016). We can also conclude that all the distractor objects used in Experiment 1 are sufficiently different from the target teddy bear that they can be efficiently processed in the first, parallel stage of visual processing. We can, therefore, use these stimuli to study how this processing stage handles heterogeneous search scenes.

We should note that there is some discrepancy between the similarity relationships reflected in our visual search results and Alexander and Zelinsky (2011)’s ratings. Our data suggested that the white reindeer and grey model car were equally dissimilar to the target teddy bear (i.e., their search slopes were almost identical).
In contrast, in Alexander and Zelinsky’s data, the reindeer was rated as being of high similarity to teddy bears, whereas the grey car was rated as having medium similarity (we used these ratings when we first selected the stimuli for this experiment). Several factors can be identified to account for this apparent inconsistency. Ratings in Alexander and Zelinsky’s study were obtained using a ranking method: five images were presented on screen and participants had to rank-order them according to their visual similarity to a teddy bear (no ties allowed). Note that the influence of non-visual factors cannot be ruled out in this ranking procedure: even if a reindeer is in fact as visually dissimilar to a teddy bear as a car is, at the moment of ranking which of the two (reindeer or car) is more similar to the bear, the conceptual similarity of the reindeer to the teddy bear (both four-legged animals) might lead observers to give the reindeer a higher similarity rank than the car. More importantly, the requirements of the two tasks are different, so the nature of the ‘similarity’ computed may also differ. In the ranking task, participants make a multidimensional decision with as much time as needed to compare the stimuli. In this case, participants might decide to down-weight differences along single dimensions when there is strong agreement along multiple dimensions. For instance, if a red teddy bear is to be judged in similarity against a green teddy bear, the match along shape, texture, size, and even semantic features might make participants judge them as being highly similar. In contrast, in a search task, the visual system tunes towards feature contrasts between target and distractor stimuli that can quickly locate the target in the scene.
In the teddy bear example, the color contrast between the target (green teddy bear) and the distractors (red teddy bears) is likely to override the similarity along the other dimensions, such that participants will find the green teddy bear quickly and efficiently in spite of its overall level of similarity to the red teddies. In this sense, it may be more appropriate to consider the modulation of log slopes as an effect of dissimilarity rather than similarity. In our experiment, the log slopes observed for the reindeer and car lures might indicate that something along those lines is occurring. It is possible that the color contrast was similar for both target-lure pairings (the target is brown, the lures are white and grey) and that this particular feature contrast was most responsible for locating the target in the scene.

## Experiment 2

In Experiment 2, we used the same stimuli as in Experiment 1 to construct heterogeneous search displays. We then compared observed RTs with predicted RTs from the four equations described in the Predictive Simulation section. We had two goals. The first was to determine which of the four equations best predicted human performance. The second was to evaluate what kind of systematic deviations exist between our theory-based RT predictions and the human data. Because of the limited number of conditions we could afford within one experimental session (about 50 minutes), we designed five different subsets of conditions of heterogeneous displays, characterized by different types of lure combinations (see Table 3). We analyzed RT data from each subset of conditions separately as well as from all conditions combined. This allowed us to evaluate whether different ways of mixing the lures produced different patterns of results. This experimental design and the planned analyses were pre-registered on the Open Science Framework (https://osf.io/2wa8f/), including the description of the four predicting equations.
Table 3 Description of all the conditions tested in Experiment 2, organized by subset. In Subsets 1–3, only two lure types were presented in the display with the target, whereas displays in Subsets 4–5 always contained all 3 types of lures in addition to the target.

| Subset | # red carrot man | # white reindeer | # grey model car | Description |
|---|---|---|---|---|
| Subset 1 | 0 | 1 | 2 | Comparable numbers of white reindeer and grey cars |
| | 0 | 3 | 4 | |
| | 0 | 7 | 8 | |
| | 0 | 15 | 16 | |
| Subset 2 | 1 | 0 | 2 | Comparable numbers of red carrot men and grey cars |
| | 3 | 0 | 4 | |
| | 7 | 0 | 8 | |
| | 15 | 0 | 16 | |
| Subset 3 | 1 | 6 | 0 | Fixed 6 reindeer, varying number of carrot men |
| | 5 | 6 | 0 | |
| | 9 | 6 | 0 | |
| | 21 | 6 | 0 | |
| Subset 4 | 1 | 1 | 1 | Roughly equal numbers of all 3 types of lures |
| | 2 | 2 | 3 | |
| | 5 | 5 | 5 | |
| | 10 | 11 | 10 | |
| Subset 5 | 1 | 4 | 2 | Fixed 4 reindeer, comparable numbers of carrot men and cars |
| | 3 | 4 | 4 | |
| | 7 | 4 | 8 | |
| | 13 | 4 | 14 | |
| Target-only | 0 | 0 | 0 | Baseline |

### Methods

Participants. Using the effect size of the two-way interaction in Experiment 1, we estimated that in order to achieve power of 0.8, we needed 19 subjects (effect size f = 1.5685, numerator df = 10, denominator df = 7, actual achieved power = 0.815, computed with G*Power; Faul, Erdfelder, Lang & Buchner, 2007). In anticipation of the need to replace some subjects, we collected data from 26 subjects recruited from the Course Credit Subject Pool at the University of Illinois at Urbana-Champaign. All participants gave written informed consent before participating in the experiment. The same procedure as in Experiment 1 was used to screen for participants with normal color vision and normal (or corrected-to-normal) visual acuity. No participants were excluded due to abnormal color vision or low visual acuity. No participants in this experiment had participated in Experiment 1. Two participants were excluded from analysis because their overall accuracy was lower than 90%. For the 24 subjects included in the analysis, ages ranged from 18 to 22 years, with a mean of 19 years; 12 were female and 22 were right-handed.
This experiment was approved by the Institutional Review Board of the University of Illinois at Urbana-Champaign.

Stimuli and Apparatus. In contrast to Experiment 1, where only one type of lure was present in each search display, displays in Experiment 2 contained 2 or 3 types of lures. These lure items were randomly intermixed across all possible spatial configurations, under the constraint that each quadrant of the screen contained the same number of items. All other aspects of the stimuli and apparatus were the same as in Experiment 1. See Figure 3 for examples of search displays.

Instructions and Procedure. The experimental procedure and instructions were the same in Experiments 1 and 2, with the exception that the practice session at the beginning of Experiment 2 had 27 trials.

Design. There were 21 total conditions in this experiment, where each condition was specified by the number of carrot men, reindeer, and model cars in the display. They were organized into 5 different subsets, each exhibiting a specific kind of variation in the number of lure items, as detailed in Table 3. Each condition was repeated 38 times, for a total of 798 trials. The location of the red dot on the target image was pseudo-randomized with equal probability of left or right, while for lure images it was randomized independently with 0.5 probability of each side. Notice that in both Experiments 1 and 2 we included a target-only condition, where the only item in the display was the target. We consider reaction time in this condition to be an important baseline for comparing performance across both groups. Mean RT in this condition represents all the RT components that do not depend on set size, e.g., the time for visual information to arrive at the cortex, response selection processes, motor response time, etc.
In the case of efficient search for a target among lures, the only component depending on set size should be the stage-one processing time, which can be computed by subtracting the target-only RT from the RT in each of the conditions with mixtures of lures. Notice that this operation is consistent with a property of the logarithmic function, i.e., ln(1) = 0: since the set size of the target-only condition is 1, its stage-one processing time is 0 under our current formulation. Thus, subtracting out the target-only condition leaves us with a direct measure of stage-one processing time.

Data analysis. According to the hypotheses laid out in the introduction, the key analysis for this experiment is a linear regression of observed RT data on predicted RT data. We used the log slope coefficients for each type of lure estimated in Experiment 1. The same fixed parameters were used for all four equations and all experimental conditions (i.e., the RT in the target-only condition and the three log slope estimates from Experiment 1). To predict RTs in a specific condition, we used the numbers of each type of lure (as indicated in Table 3). The predicted RT for each condition is the sum of the predicted stage-one time cost (derived separately from Equations 1–4) plus the mean RT of the target-only condition (which contains the time costs for the other cognitive stages, such as encoding, response selection, and execution). The analysis consisted of linear regressions of observed RTs as a function of predicted RTs for each subset of conditions. Because there were four equations to be compared, there were four different sets of predicted RT values. Each set of predicted RT values was based on all 20 non-target-only conditions of the experiment. Thus, four regressions were performed using observed RTs as the dependent variable and each set of predicted RTs as the independent variable.
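The condition-level regressions just described, scored by R², small-sample-corrected AIC, and RMSE, can be sketched as below. The actual analysis used Matlab's fitlm; this Python stand-in uses placeholder data, and AIC constant conventions differ across packages, so only differences between equations (not absolute values) are meaningful.

```python
import numpy as np

def regression_diagnostics(predicted, observed):
    """Regress observed RT on predicted RT; return (R^2, AICc, RMSE).
    AIC is computed up to an additive constant, with k = 3 estimated
    parameters (slope, intercept, error variance)."""
    slope, intercept = np.polyfit(predicted, observed, 1)
    resid = observed - (slope * predicted + intercept)
    n, k = len(observed), 3
    rss = float(resid @ resid)
    r2 = 1.0 - rss / float(((observed - observed.mean()) ** 2).sum())
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    rmse = float(np.sqrt(rss / n))
    return r2, aicc, rmse

# Illustrative comparison: an accurate predictor vs. a noisier one
# (placeholder values, not the experiment's data).
rng = np.random.default_rng(2)
observed = rng.uniform(650.0, 900.0, size=20)
pred_good = observed + rng.normal(0.0, 10.0, size=20)
pred_poor = observed + rng.normal(0.0, 60.0, size=20)

r2_g, aicc_g, rmse_g = regression_diagnostics(pred_good, observed)
r2_p, aicc_p, rmse_p = regression_diagnostics(pred_poor, observed)
```

A lower AICc (and RMSE) marks the better-predicting equation, which is how the four equations are ranked in Table 5.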
To compare the performance or ‘goodness of fit’ across the four equations, we computed the R-square and the Akaike Information Criterion (AIC; Akaike, 1974) values for each model. These computations were carried out using the fitlm() function and the R-squared and ModelCriterion methods of the Linear Model class in Matlab. In a further analysis focused on the best-performing equation, we analyzed whether the human data showed any systematic deviations from the model. We interpret such deviations as the effect of heterogeneity/homogeneity on stage-one processing. As described before, we were interested in identifying either additive or multiplicative deviations. To estimate them, we performed six regressions on the stage-one time costs obtained from the best-performing equation: one for each subset of conditions (5 total), plus one overall regression combining all conditions. In this manner, the estimated intercept coefficients become a useful indicator of any systematic lure-to-lure interaction effects in homogeneous search. If this interaction does not cause an additive difference, then the estimated intercept should equal zero, assuming that our best-performing equation provides a truthful prediction. Hence, any substantial difference between the estimated intercept and zero becomes a measure of the magnitude of this additive effect. By the same logic, deviation of the estimated slope coefficient from 1 represents a multiplicative effect of inter-lure interaction in homogeneous search displays.

### Results

Descriptive statistics. Mean RTs are plotted for each condition, grouped by subset, in Figure 5. Visual inspection shows that Subset 1 was uniquely separated from the other subsets. Subset 1 was the only one that did not contain images of the carrot man stimulus. This pattern was confirmed when we performed logarithmic regressions on each subset of the data (see Table 4).
The log slope of Subset 1 (D = 26.881) was very close to the homogeneous slope estimates for the white reindeer (Dwhite reindeer = 28.492) and the grey car (Dgrey car = 26.581) from Experiment 1, in spite of the fact that in Subset 1 there were always two types of lures present in the display. In contrast, the log slopes for Subsets 2 to 5 were all greater than the homogeneous slope estimate for the red carrot man (Dred carrotman = 66.278) from Experiment 1. That is, even though in each of Subsets 2–5 the red carrot man was paired with stimuli that were lower in similarity to the target, processing time increased compared to displays containing only carrot man stimuli. Finally, it is worth noting that the regressions for all subsets had very large R-squares. This indicates that for all subsets, the underlying processing was consistent with the parallel, exhaustive nature of stage one proposed by our theory. It should be noted, however, that the logarithmic RT-set size pattern for each of the subsets depends on the fact that within each subset, the proportions of each type of lure were roughly constant (by our design). If such proportion constancy is absent, there is no a priori reason to believe that any group of heterogeneous search data will exhibit a logarithmic function.

Figure 5 Reaction times in Experiment 2 as a function of set size, grouped by the different subsets of conditions. Error bars indicate one standard error of the mean. Curves are best-fitting logarithmic functions; see Table 4 for regression model coefficients and R-squares.

Table 4 Logarithmic regression results of search RT for each subset of conditions.

| | Log slope (D) | Intercept | R² |
|---|---|---|---|
| Subset 1 | 26.881 | 663.97 | 0.9722 |
| Subset 2 | 79.590 | 649.38 | 0.9811 |
| Subset 3 | 75.183 | 650.16 | 0.9307 |
| Subset 4 | 70.196 | 654.85 | 0.9801 |
| Subset 5 | 71.974 | 649.76 | 0.9573 |

Model comparison. We computed predicted RTs for each of the four equations and regressed observed mean RTs on those predicted RTs.
The linear-fit R-squares, AICc values (corrected for small sample sizes), and root mean square errors (RMSE) are summarized in Table 5. Consistent with the results from the Predictive Simulation section, all three measures indicated that Equation 1 was the best-performing equation in terms of precision and likelihood. Specifically, the corrected AIC values indicated that the model based on Equation 1 was 166 times more likely than the second-best model (Equation 3).4 Also, the R-square value for Equation 1 (0.9681) is roughly the same as the R-square obtained in the Predictive Simulation section (0.9615, Table 1), when Equation 1 was used to predict simulated heterogeneous search data. This might indicate an upper bound of predictive accuracy for this equation. The RMSE (an indicator of the average amount of prediction error) for Equation 1 was 14.520 ms, which compares favorably to the smallest observed standard error of mean RTs (S.E. = 14.181 ms, in the condition with 13 carrot men, 4 reindeer, and 14 grey cars). The accuracy of Equation 1’s predictions is all the more remarkable given that these predictions were based on parameters estimated from Experiment 1’s data, which came from a different group of participants. Further, the two groups of subjects saw qualitatively different displays: participants in Experiment 1 only saw homogeneous displays, whereas participants in Experiment 2 never saw homogeneous displays (i.e., they only saw heterogeneous displays).

Table 5 Linear regression results of predicted RT to observed RT in Experiment 2.

| | Equation 1 | Equation 2 | Equation 3 | Equation 4 |
|---|---|---|---|---|
| R² | 0.9681 | 0.9178 | 0.9480 | 0.9153 |
| AICc | 174.533 | 194.392 | 184.757 | 195.003 |
| RMSE (ms) | 14.520 | 23.298 | 18.522 | 23.639 |

In sum, Equation 1 represents an architecture that is equally successful at predicting performance in simulations and in human experiments.

Estimating homogeneity effects.
To investigate any potential effect of homogeneity facilitation between identical lure items, observed RTs were first transformed to observed stage-one processing times by subtracting out the target-only RT. Then, we fitted observed stage-one processing times to predicted stage-one processing times based on Equation 1. Regressions were computed for all conditions combined as well as for each subset of conditions. The resulting coefficients are listed in Table 6, along with the standard errors of the estimates.

Table 6 Regression coefficients of the regression of Equation 1's predicted stage-one processing times to observed stage-one processing times.

|             | Intercept: Estimate | Std. Error | 95% C.I.        | Slope: Estimate | Std. Error | 95% C.I.     |
|-------------|---------------------|------------|-----------------|-----------------|------------|--------------|
| All subsets | –10.961             | 7.268      | [–26.17; 4.25]  | 1.3328          | 0.0555     | [1.22; 1.45] |
| Subset 1    | 0.948               | 5.986      | [–18.11; 19.99] | 0.9554          | 0.0938     | [0.66; 1.25] |
| Subset 2    | –4.392              | 6.585      | [–25.35; 16.56] | 1.3587          | 0.0515     | [1.19; 1.52] |
| Subset 3    | 2.038               | 16.573     | [–50.71; 54.78] | 1.2063          | 0.1179     | [0.83; 1.58] |
| Subset 4    | –1.242              | 10.217     | [–33.76; 31.27] | 1.2905          | 0.0857     | [1.02; 1.56] |
| Subset 5    | –1.242              | 6.277      | [–21.22; 18.74] | 1.2890          | 0.0474     | [1.14; 1.44] |

To evaluate whether there was an additive effect of homogeneity on stage-one processing times, we computed and report 95% confidence intervals for both coefficients. The regression on all 20 conditions combined had 19 degrees of freedom, whereas the regressions on each subset had 3. All six intercepts' confidence intervals included zero, indicating that there was no meaningful additive deviation when predicting heterogeneous search times using efficiency parameters (D values) from homogeneous search data. Next, to evaluate whether there was a multiplicative effect of homogeneity on stage-one processing times, we compared the 95% confidence intervals of the slope coefficients to 1.
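The confidence intervals reported in Table 6 are consistent with the usual t-based interval, estimate ± t(0.975, df) × SE, with df = 19 for the combined regression and df = 3 within each subset. A sketch that reconstructs two of the intervals:

```python
from scipy import stats

def t_ci(estimate, se, df, level=0.95):
    """Two-sided t-based confidence interval for a regression coefficient."""
    t_crit = stats.t.ppf(0.5 + level / 2.0, df)
    return estimate - t_crit * se, estimate + t_crit * se

# Slope for all subsets combined (df = 19); Table 6 reports [1.22; 1.45]
lo, hi = t_ci(1.3328, 0.0555, df=19)
print(round(lo, 2), round(hi, 2))  # 1.22 1.45

# Slope for Subset 2 (df = 3); Table 6 reports [1.19; 1.52].
# The interval excludes 1, the no-multiplicative-effect benchmark.
lo2, hi2 = t_ci(1.3587, 0.0515, df=3)
print(round(lo2, 2), round(hi2, 2))  # 1.19 1.52
```

The same computation applied to Subset 1's slope (0.9554 ± 3.182 × 0.0938) yields an interval straddling 1, which is why that subset shows no reliable multiplicative deviation.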
The results indicated that slope coefficients were significantly larger than 1 when all conditions were combined, as well as in 3 out of 5 Subsets (specifically, Subsets 2, 4 and 5 had confidence intervals whose lower limits exceeded 1). In other words, our best-predicting equation systematically under-predicted stage-one processing time in heterogeneous search tasks by a multiplicative factor. This multiplicative factor is approximately 1.3 for this particular set of search stimuli, and can be viewed as a quantitative estimate of the pure effect of heterogeneity. This pattern of results is visualized in Figure 6. Recall that when making predictions using Equation 1, the log slope parameters for each lure type were estimated from homogeneous search data and that Equation 1 assumed processing independence between individual items. The most straightforward explanation of this multiplicative deviation, then, is that in Experiment 1, when adjacent lures were identical, processing of individual lure items sped up by a multiplicative factor of about 1.3, in logarithmic efficiency space. In other words, this would mean that the estimated D parameters from Experiment 1 were, in fact, under-estimating the true ‘standalone’ processing efficiency of each individual lure item because of the presence of lure-to-lure interactions in homogeneous displays. In contrast, in heterogeneous search, adjacent items are less likely to be identical and thus this type of suppression is less likely to take place and improve performance.

Figure 6 Predictions of Equation 1 for each Subset plotted against observed stage-one processing time. Error bars indicate one standard error of the mean. The lines y = x and y = 1.3x are plotted for reference. Observed stage-one processing time was computed by subtracting target-only RT from the RT of the other conditions.
The data points for Subset 1 fell close to the y = x line, while points for the other Subsets were close to the y = 1.3x line, except for a couple of points in Subsets 3 and 4.

### Discussion

Equation 1 provided the best predictions of heterogeneous search reaction time using log slope values estimated from homogeneous search data, just as it did for simulated data. Thus, the predictive power of our theory was confirmed by empirical data. Therefore, Equation 1 represents a formula that will allow investigators to predict performance in heterogeneous search scenes. In the present study, Equation 1 accounted for 96.81% of the variance across a total of 20 different experimental conditions. This predictive success is all the more compelling given that predictions were based on parameter estimates from different participants. Further, since our simulation of both homogeneous and heterogeneous search assumed processing independence between individual items, systematic deviations from Equation 1’s predictions can be used to estimate quantitatively, for the first time, the extent and effect of homogeneity facilitation in efficient search tasks. The results indicated a systematic multiplicative deviation, suggesting that in homogeneous displays, identical items do interact in a facilitative fashion and are not truly independently processed. This facilitation effect can be characterized by a constant multiplicative factor that does not depend on set size. Because the general formula describing the RT by set size functions in efficient visual search takes the form RT = a + Dln(N), where N is set size, we can infer that the facilitation effect resulted in an underestimation of the D coefficients in our Experiment 1. And since D coefficients were found to be directly related to the thresholds of accumulators, we propose that the facilitation was a result of a multiplicative lowering of the thresholds between adjacent, identical items.
We discuss this finding further in the General Discussion. The slope coefficients for all conditions combined, as well as for Subsets 2–5, all indicated a systematic multiplicative under-prediction by a factor somewhere between 1.2 and 1.3, in a fairly consistent pattern. It should be noted, however, that the regression analysis on stage-one processing times for Subset 1 showed a slope coefficient that deviated from the other groups: it was much closer to 1 (estimate = 0.9554, standard error = 0.0938). Recall that Subset 1 also had a cluster of RTs that substantially differed from the other Subsets, as shown in Figure 5. Finally, it is also important to acknowledge that for Subset 3, although numerically larger than 1, the estimated slope coefficient was not significantly different from 1 (estimate = 1.2063, standard error = 0.1179). That said, we still view the results from Subset 3 as being in line with our interpretation of multiplicative lure-to-lure interaction effects in homogeneous displays. We think the slope coefficient failed to reach significance due to the relatively large standard error in that condition. It may also be that simply because of a lack of power, we could not simultaneously detect that all four slopes for Subsets 2, 3, 4 and 5 were greater than one (see Francis, 2012). That leaves open the question of why the slope for Subset 1 was so different from the slope of all the other subsets: why was the multiplicative effect of homogeneity absent in Subset 1? It is worth recalling that Subset 1 displays were constructed with mixtures of reindeer and model cars. Both of these lures have very low levels of similarity to the target, as indexed by their D coefficients from Experiment 1. In fact, their D values are very close to each other. This can be interpreted as reflecting that, in spite of reindeer and model cars being clearly different stimuli, they are both equally dissimilar to the target teddy bear.
More data are needed to understand why Subset 1’s results differed from those of the other Subsets. There are at least two possible explanations. The first is that, at extremely low levels of lure-target similarity, homogeneity facilitation effects are absent. If so, the D values observed in Experiment 1 are good predictors of performance in heterogeneous displays, simply because the D values truly represented the stand-alone processing efficiency of those items. A second possibility is that, for this pair of stimuli, the same lure-to-lure interaction effects are present in both homogeneous and heterogeneous displays. That is, perhaps when two types of lures are equally dissimilar to the target, they can mutually facilitate each other as if they were identical lures. If so, D values in Experiment 1 did reflect lure-to-lure suppression effects, but these values produced accurate predictions in Experiment 2 because mixing reindeer and cars (when looking for a teddy bear) allows reindeer and cars to mutually facilitate each other to the same extent as when they are each presented in isolation.

## General Discussion

Recent work in our lab has uncovered that there is systematic variability in stage-one processing times and that much can be learned about the architecture of early visual processing by studying this variability (Buetti et al., 2016). Typically, the literature has assumed that in fixed-target efficient visual search tasks, reaction times (RTs) do not meaningfully vary as a function of set size or other display characteristics. In a series of experiments, we demonstrated that RTs increase logarithmically as a function of the number of items in the display and, further, that the steepness of the log function is modulated by the similarity between the target and the distractors.
In the present study, we followed up on this research and tested four different specific computational implementations of stage-one processing that produced specific RT predictions for different visual search conditions in heterogeneous scenes. Both computer simulations and human data indicated that Equation 1 was the best-performing equation to predict stage-one completion times. This equation assumed parallel, unlimited-capacity and exhaustive processing, with complete inter-item processing independence, as initially proposed in Buetti et al. (2016). Using data from homogeneous search tasks with real-world objects (Experiment 1), we were able to predict heterogeneous search RTs (Experiment 2), accounting for as much as 96.8% of the variance and with high precision, as indicated by the 14.520 ms RMSE. This prediction was made across participants: that is, parameters were estimated on one set of participants and predictions were confirmed on an entirely new set of participants who had never participated in one of these search experiments and never saw any homogeneous displays like the ones used to estimate D parameters. The only common condition across experiments was the target-only condition. Finally, we used systematic deviations from the predictions of our model to estimate quantitatively, for the first time in the literature, the effects of homogeneity facilitation on performance in homogeneous displays, similar to the ones traditionally used in the literature to study efficient, a.k.a. pop-out, search (all elements identical but one). We found evidence that in homogeneous displays, there is a facilitatory processing effect whereby evidence thresholds are systematically reduced.
This results in an improvement in overall search efficiency (in logarithmic space) in homogeneous scenes and thus, D coefficients estimated in homogeneous scenes end up under-predicting performance, by a multiplicative factor, in heterogeneous scenes where lure-to-lure interactions are absent (or much reduced).

### Implications regarding homogeneity effects in search

The idea that the degree of heterogeneity (or homogeneity) in a scene influences visual search processing efficiency is not new. Duncan and Humphreys (1989) referred to it as the nontarget-nontarget similarity effect. Their claim that nontarget-nontarget similarity increases processing efficiency was based on their Experiments 3 and 4. However, in both experiments, search slopes for homogeneous displays were collapsed across different distractors and compared to heterogeneous search slopes, which were also collapsed across easy and difficult conditions in Experiment 4. Perhaps most importantly, there was no direct manipulation of the degree of heterogeneity in these experiments: there was always a nearly equal number of both types of distractors in all heterogeneous scenes. Thus, the evidence in Duncan and Humphreys is in fact limited to an observed difference between homogeneous scenes and a specific type of heterogeneous scene (a 50-50 mix of items). Duncan and Humphreys proposed that the search slope should increase continuously as the degree of nontarget-nontarget similarity decreases (Figure 3, Duncan & Humphreys, 1989), but there was no direct evidence for this continuum. In contrast, here we conducted a more systematic evaluation of heterogeneity and of differences in processing between heterogeneous and homogeneous scenes. We analyzed homogeneous search separately for three different types of lures and designed displays with varying degrees of heterogeneity using those stimuli.
Whereas Duncan and Humphreys suggested increasing linear search slopes with increasing degree of heterogeneity, we found that different mixtures (e.g., mixing two or three types of lure items) can be accounted for by a single constant factor (around 1.3). This result suggests that processing in heterogeneous scenes is somewhat insensitive to variations in the types of heterogeneity in those scenes. This finding somewhat contradicts Duncan and Humphreys’ spreading suppression account of homogeneity because, according to their account, the efficiency with which items are processed ought to be affected by distractor context (i.e., the composition of the distractor set), whereas our results suggest it is not. Granted, Duncan and Humphreys had theorized this modulation of search efficiency as occurring in stage two, whereas, here, we only focused on changes in efficiency during stage-one processing. More data are needed to continue evaluation of these conclusions. At the theoretical level, according to Duncan and Humphreys, items are given different attentional weights or different amounts of resources from a limited pool, depending on their similarity to the target template. Items (or ‘structural units’) compete for access to visual short-term memory by their weight. Further, the more items perceptually group with each other, the more strongly the weights of those items will covary. This spreading suppression mechanism thus entails an overall bias (i.e., weight) to reject grouped items that is a result of lower-level perceptual grouping mechanisms. Importantly, spreading suppression is not a form of lateral inhibition; rather, it is a description of how the weight given to an item will “spread” to other items as a function of the strength of grouping between those items. Homogeneous scenes therefore produce faster RTs than heterogeneous scenes because the grouping strength amongst elements in homogeneous scenes is much stronger than in heterogeneous scenes.
The spreading suppression account was thought to be a further advance over the traditional ‘perceptual grouping’ accounts (Bundesen & Pedersen, 1983; Farmer & Taylor, 1980) because it described how grouping strength affected attentional priorities. There is reason, however, to doubt this spreading suppression account, at least in this simple form, because our results (here and in Buetti et al., 2016) demonstrate that items are not rejected as groups in homogeneous scenes as proposed by Duncan and Humphreys. Rather, the fact that parallel search exhibits an exhaustive processing rule (and logarithmic efficiency) implies that every element in a scene matters (with each additional element contributing a non-zero cost to RT), in spite of whatever grouping effects might be observed amongst lures. Consequently, our results imply that a mechanism different from Duncan and Humphreys’ spreading suppression is at play in homogeneous search. We foresee at least two possible mechanisms. First, it is possible that instead of grouping similar search items, decisions are still made for each individual item, but adjacent identical distractor items facilitate each other by reducing the amount of information needed (i.e., the thresholds of accumulators) to reach a decision of rejection. This lowering of the thresholds could be due to the knowledge that only a single target exists in the display, which implies that for any two adjacent items, the more similar they are to each other, the less likely it is that either of them is the target. This can be easily tested, for example, by controlling how often two identical items appear next to each other in a heterogeneous search scene. One possible extreme is when scenes consist of homogeneous regions, each containing a different type of lure, so that within each region, all adjacent items are identical to each other, and facilitation over the search scene should be maximized.
The opposite extreme case would be when different types of lures are carefully ‘interlaced’ with each other so that adjacent lures are always different from one another. Our first hypothesis would predict that when facilitation is maximized, stage-one processing time should be nearly perfectly predicted by homogeneous search coefficients (i.e., the slope estimates reported in Table 6 should all be close to 1). On the other hand, when such inter-item facilitation is minimized, stage-one processing time should deviate even further from the predictions based on homogeneous search data and Equation 1 (i.e., the slope estimates should be larger than the ones reported in Table 6). Alternatively, the lowering of thresholds for identical items could reflect the presence of an evidence monitoring mechanism. An evidence monitoring mechanism is one that observes (i.e., monitors) how evidence accumulates at all local accumulators and sums up (or averages) evidence over all (or large) regions of the scene, much like global motion detectors sum/average local motion signals to extract a global motion direction. Applied to lure processing, as information accumulates, regions containing identical lure items will produce stronger evidence against target presence, compared to regions containing different lure items. Thus, homogeneous regions can be discarded sooner as being unlikely to contain the target. Precise location information would not be available for these global accumulators because they represent large regions, but that is not a big problem: representing lure-location information is unnecessary for task completion; what is needed, rather, is a representation of the target location. Rejecting large regions of the display as unlikely to contain the target does help to reduce the uncertainty about the target location.
Further, an advantage of such an evidence monitoring mechanism is that it can facilitate the orienting response towards regions that are more likely to contain the target (if one is present). This might happen even before evidence accumulation completes for all items within the region containing the target. In other words, imagine a scene where low lure-target similarity items are to the left of fixation and high lure-target similarity items are to the right (where the target is). On average, the left region will finish processing sooner than the right region. Once the left region is rejected, the eyes can start moving to the right of fixation, even before information about the specific target location is represented. This possibility too can be tested in future work. It is also interesting to consider how our results relate to other accounts of heterogeneity effects in visual search. The Signal Detection Theory model of visual search has been shown to offer a natural explanation of the heterogeneity effect (Palmer, Verghese, & Pavel, 2000), based on the increased external noise resulting from heterogeneity in the search scene. This account assumes that representations of individual items are independent, while the heterogeneity effect arises from statistical influences of a reduced signal-to-noise ratio in a decision stage. This independence assumption differs from our current proposal as well as from that of Duncan and Humphreys (1989). The Attention based on Information Maximization model (AIM) of visual saliency proposed by Bruce and Tsotsos (2009) also accounts gracefully for common heterogeneity effects, based on the idea that when the distractor set is heterogeneous, each item is intrinsically rarer than when the set is homogeneous. Hence, according to AIM, the heterogeneity effect arises purely from bottom-up saliency computation, without the comparison of search items to a target template.
While it is not immediately clear how these models can account for our present finding that heterogeneity seems to change search efficiency by a relatively constant multiplicative factor (within our experimental conditions), both models are highly specific and can make testable predictions with appropriate adjustments. Future work is needed to contrast these accounts with ours, although neither of the two models seems compatible with our basic finding of a logarithmic time cost function of stage-one processing. In sum, the current data present a challenge to traditional views of distractor homogeneity effects and suggest that further study is needed to understand the mechanisms underlying this search facilitation effect. A potential avenue for further testing this facilitative interaction between homogeneous items is through the use of the capacity coefficient (Townsend & Wenger, 2004; for an example of application, see Godwin, Walenchok, Houpt, Hout, & Goldinger, 2015), which could provide more direct evidence for the violation of independent processing in stage one.

## Conclusion

In a target discrimination task with a fixed target, efficient visual search is best characterized as arising from a system that processes all items in a parallel, unlimited-capacity, and exhaustive fashion. Under this conceptualization, a lawful relationship between heterogeneous and homogeneous search performance was predicted by simulation and confirmed by experiments with a novel methodology. Results indicated that, rather than being completely independent, individual items facilitate each other’s processing when they appear in the context of other identical items. This facilitation effect can be characterized by a multiplicative factor in logarithmic space that does not change with set size. This result presents a challenge for traditional accounts of distractor homogeneity effects, like spreading suppression.
These findings also extend the application of Buetti et al.’s (2016) theory to real-world objects and heterogeneous search tasks and demonstrate the computational specificity of our model of stage-one processing. Therefore, early parallel processing in visual search is non-trivial: it systematically contributes to reaction time, plays an important role in achieving the search goal, and can be mechanistically understood. More generally, this paper presents a novel approach for studying visual search: a predictive inference approach. While most studies in visual search draw mechanistic inferences based on descriptive data for a given set of manipulated conditions (i.e., the mean/slope in condition A is smaller than the mean/slope in condition B, therefore …), here we suggest that great experimental and theoretical validity is afforded by making specific predictive inferences about new experimental conditions. More specifically, making predictions about what processing times ought to be in heterogeneous displays allowed us to quantitatively estimate, for the first time, the effects of homogeneity facilitation, independently of other factors like lure-target similarity.

Appendix A Conditions Used in Simulation. DOI: https://doi.org/10.1525/collabra.53.s1

## Notes

1. An additional underlying assumption is that measures of lure-target similarity generalize across subjects, at least at the group level. Thus, we can take the estimates from one set of participants and use them to predict performance in a new set of participants.
2. With two types of items, for example, the essential integral to be solved takes the following form:

$\begin{array}{l}\int \left(1-\Phi\left(\sqrt{\frac{\lambda_1}{t}}\left(\frac{t}{\mu_1}-1\right)\right)-e^{\frac{2\lambda_1}{\mu_1}}\Phi\left(-\sqrt{\frac{\lambda_1}{t}}\left(\frac{t}{\mu_1}+1\right)\right)\right)^{n_1-1}\\ \quad\left(1-\Phi\left(\sqrt{\frac{\lambda_2}{t}}\left(\frac{t}{\mu_2}-1\right)\right)-e^{\frac{2\lambda_2}{\mu_2}}\Phi\left(-\sqrt{\frac{\lambda_2}{t}}\left(\frac{t}{\mu_2}+1\right)\right)\right)^{n_2}\\ \quad\left(e^{-\frac{\lambda_1}{t}\left(\frac{t}{\mu_1}-1\right)^2}+e^{\frac{2\lambda_1}{\mu_1}}e^{-\frac{\lambda_1}{t}\left(\frac{t}{\mu_1}+1\right)^2}\right)t\,dt\end{array}$

where n1 and n2 are the numbers of the two types of items, λ1, µ1, λ2, µ2 are the corresponding parameters, and Φ(x) is the CDF of the standard normal distribution. We welcome any ideas or suggestions about how to find an analytical solution to this problem.

3. The log likelihood is a measure of how likely the regression model is given the observed data: the higher the log likelihood, the more likely a specific model is. The relative likelihood ratio between two models can be computed as exp(L1 − L2), where L1 and L2 are the log likelihood values.

4. The relative likelihood ratio between two linear models can be computed from their AIC values by the formula exp((AIC1 − AIC2)/2).
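This AIC-based relative likelihood can be checked directly against the AICc values reported in Table 5:

```python
import math

def relative_likelihood(aic_worse, aic_better):
    """How many times more likely the lower-AIC model is (footnote 4)."""
    return math.exp((aic_worse - aic_better) / 2.0)

# AICc values from Table 5
aicc = {"eq1": 174.533, "eq2": 194.392, "eq3": 184.757, "eq4": 195.003}

print(relative_likelihood(aicc["eq2"], aicc["eq1"]))  # ≈ 20527
print(relative_likelihood(aicc["eq3"], aicc["eq1"]))  # ≈ 166
print(relative_likelihood(aicc["eq4"], aicc["eq1"]))  # ≈ 27861
```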
Thus the regression model based on Equation 1 was $\exp\left(\frac{194.392-174.533}{2}\right)=20527.07$ times more likely than the Equation 2 based model, $\exp\left(\frac{184.757-174.533}{2}\right)=166.00$ times more likely than the Equation 3 based model, and $\exp\left(\frac{195.003-174.533}{2}\right)=27861.47$ times more likely than the Equation 4 based model.

## Competing Interests

The authors have no competing interests to declare.

## References

1. Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control 19(6): 716–723. DOI: https://doi.org/10.1109/TAC.1974.1100705
2. Alexander, R. G. and Zelinsky, G. J. (2011). Visual similarity effects in categorical search. Journal of Vision 11(8): 9. DOI: https://doi.org/10.1167/11.8.9
3. Awh, E., Barton, B. and Vogel, E. K. (2007). Visual working memory represents a fixed number of items regardless of complexity. Psychological Science 18(7): 622–628. DOI: https://doi.org/10.1111/j.1467-9280.2007.01949.x
4. Bravo, M. J. and Nakayama, K. (1992). The role of attention in different visual-search tasks. Perception & Psychophysics 51(5): 465–472. DOI: https://doi.org/10.3758/BF03211642
5. Breitmeyer, B. G. (1992). Parallel processing in human vision: History, review, and critique.
6. Bruce, N. D. and Tsotsos, J. K. (2009). Saliency, attention, and visual search: An information theoretic approach. Journal of Vision 9(3): 5. DOI: https://doi.org/10.1167/9.3.5
7. Buetti, S., Cronin, D. A., Madison, A. M., Wang, Z. and Lleras, A. (2016). Towards a better understanding of parallel visual processing in human vision: Evidence for exhaustive analysis of visual information. Journal of Experimental Psychology: General 145(6): 672–707. DOI: https://doi.org/10.1037/xge0000163
8. Bundesen, C. (1990). A theory of visual attention. Psychological Review 97(4): 523. DOI: https://doi.org/10.1037/0033-295X.97.4.523
9. Bundesen, C.
and Pedersen, L. F. (1983). Color segregation and visual search. Perception & Psychophysics 33(5): 487–493. DOI: https://doi.org/10.3758/BF03202901
10. Bylinskii, Z., Judd, T., Durand, F., Oliva, A. and Torralba, A. (2014). MIT saliency benchmark.
11. Chhikara, R. (1988). The Inverse Gaussian Distribution: Theory, Methodology, and Applications (Vol. 95). CRC Press.
12. Chong, S. C. and Treisman, A. (2005a). Statistical processing: Computing the average size in perceptual groups. Vision Research 45(7): 891–900. DOI: https://doi.org/10.1016/j.visres.2004.10.004
13. Chong, S. C. and Treisman, A. (2005b). Attentional spread in the statistical processing of visual displays. Perception & Psychophysics 67(1): 1–13. DOI: https://doi.org/10.3758/BF03195009
14. Cockrill, P. (2001). The teddy bear encyclopedia. New York: DK Publishing.
15. Duncan, J. and Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review 96(3): 433. DOI: https://doi.org/10.1037/0033-295X.96.3.433
16. Farmer, E. W. and Taylor, R. M. (1980). Visual search through color displays: Effects of target-background similarity and background uniformity. Perception & Psychophysics 27(3): 267–272. DOI: https://doi.org/10.3758/BF03204265
17. Faul, F., Erdfelder, E., Lang, A. G. and Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39(2): 175–191. DOI: https://doi.org/10.3758/BF03193146
18. Fei-Fei, L., Iyer, A., Koch, C. and Perona, P. (2007). What do we perceive in a glance of a real-world scene? Journal of Vision 7(1): 10. DOI: https://doi.org/10.1167/7.1.10
19. Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review 19(6): 975–991. DOI: https://doi.org/10.3758/s13423-012-0322-y
20. Godwin, H. J., Walenchok, S. C., Houpt, J. W., Hout, M. C. and Goldinger, S. D. (2015).
Faster than the speed of rejection: Object identification processes during visual search for multiple targets. Journal of Experimental Psychology: Human Perception and Performance 41(4): 1007. DOI: https://doi.org/10.1037/xhp0000036
21. Haberman, J. and Whitney, D. (2009). Seeing the mean: Ensemble coding for sets of faces. Journal of Experimental Psychology: Human Perception and Performance 35(3): 718. DOI: https://doi.org/10.1037/a0013899
22. Itti, L. and Baldi, P. (2009). Bayesian surprise attracts human attention. Vision Research 49(10): 1295–1306. DOI: https://doi.org/10.1016/j.visres.2008.09.007
23. Itti, L. and Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research 40(10): 1489–1506. DOI: https://doi.org/10.1016/S0042-6989(99)00163-7
24. Itti, L. and Koch, C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience 2(3): 194–203. DOI: https://doi.org/10.1038/35058500
25. Jonides, J. and Gleitman, H. (1972). A conceptual category effect in visual search: O as letter or as digit. Perception & Psychophysics 12(6): 457–460. DOI: https://doi.org/10.3758/BF03210934
26. Kahneman, D., Treisman, A. and Gibbs, B. J. (1992). The reviewing of object files: Object-specific integration of information. Cognitive Psychology 24(2): 175–219. DOI: https://doi.org/10.1016/0010-0285(92)90007-O
27. Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R. and Broussard, C. (2007). What’s new in Psychtoolbox-3. Perception 36(14): 1.
28. Kruthiventi, S. S., Ayush, K. and Babu, R. V. (2015). DeepFix: A fully convolutional neural network for predicting human eye fixations. arXiv preprint arXiv:1510.02927.
29. Li, F. F., VanRullen, R., Koch, C. and Perona, P. (2002). Rapid natural scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences 99(14): 9596–9601. DOI: https://doi.org/10.1073/pnas.092277599
30. Lleras, A., Madison, A., Cronin, D., Wang, Z.
and Buetti, S. (2015). Towards a better understanding of the role of parallel attention in visual search. Journal of Vision 15(12): 1255. DOI: https://doi.org/10.1167/15.12.1255
31. Luck, S. J. and Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature 390(6657): 279–281. DOI: https://doi.org/10.1038/36846
32. Najemnik, J. and Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal search strategy. Journal of Vision 8(3): 4. DOI: https://doi.org/10.1167/8.3.4
33. Neider, M. B. and Zelinsky, G. J. (2008). Exploring set size effects in scenes: Identifying the objects of search. Visual Cognition 16(1): 1–10. DOI: https://doi.org/10.1080/13506280701381691
34. Nordfang, M. and Wolfe, J. M. (2014). Guided search for triple conjunctions. Attention, Perception, & Psychophysics 76(6): 1535–1559. DOI: https://doi.org/10.3758/s13414-014-0715-2
35. Oliva, A. (2005). Gist of the scene. Neurobiology of Attention 696(64): 251–258. DOI: https://doi.org/10.1016/B978-012375731-9/50045-8
36. Palmer, J., Ames, C. T. and Lindsey, D. T. (1993). Measuring the effect of attention on simple visual search. Journal of Experimental Psychology: Human Perception and Performance 19(1): 108. DOI: https://doi.org/10.1037/0096-1523.19.1.108
37. Palmer, J., Verghese, P. and Pavel, M. (2000). The psychophysics of visual search. Vision Research 40(10): 1227–1268. DOI: https://doi.org/10.1016/S0042-6989(99)00244-8
38. Parkes, L., Lund, J., Angelucci, A., Solomon, J. A. and Morgan, M. (2001). Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience 4(7): 739–744. DOI: https://doi.org/10.1038/89532
39. Pashler, H. (1992). Attentional limitations in doing two tasks at the same time. Current Directions in Psychological Science 1(2): 44–48. DOI: https://doi.org/10.1111/1467-8721.ep11509734
40. Potter, M. C. (1976). Short-term conceptual memory for pictures.
Journal of experimental psychology: human learning and memory 2(5): 509.DOI: https://doi.org/10.1037/0278-7393.2.5.509 41. Potter, M. C. and Levy, E. I. (1969). Recognition memory for a rapid sequence of pictures. Journal of experimental psychology 81(1): 10.DOI: https://doi.org/10.1037/h0027470 42. Rosenholtz, R., Huang, J., Raj, A., Balas, B. J. and Ilie, L. (2012). A summary statistic representation in peripheral vision explains visual search. Journal of vision 12(4): 14–14, DOI: https://doi.org/10.1167/12.4.14 43. Schyns, P. G. and Oliva, A. (1994). From blobs to boundary edges: Evidence for time-and spatial-scale-dependent scene recognition. Psychological science 5(4): 195–200, DOI: https://doi.org/10.1111/j.1467-9280.1994.tb00500.x 44. Shapiro, K. L., Raymond, J. E. and Arnell, K. M. (1997). The attentional blink. Trends in cognitive sciences 1(8): 291–296, DOI: https://doi.org/10.1016/S1364-6613(97)01094-2 45. Sigman, M. and Dehaene, S. (2008). Brain mechanisms of serial and parallel processing during dual-task performance. The Journal of neuroscience 28(30): 7585–7598, DOI: https://doi.org/10.1523/JNEUROSCI.0948-08.2008 46. Sperling, G. (1960). The information available in brief visual presentations. Psychological monographs: General and applied 74(11): 1.DOI: https://doi.org/10.1037/h0093759 47. Townsend, J. T. and Ashby, F. G. (1983). Stochastic modeling of elementary psychological processes. CUP Archive. 48. Townsend, J. T. and Wenger, M. J. (2004). A theory of interactive parallel processing: new capacity measures and predictions for a response time inequality series. Psychological review 111(4): 1003.DOI: https://doi.org/10.1037/0033-295X.111.4.1003 49. Treisman, A. M. and Gelade, G. (1980). A feature-integration theory of attention. Cognitive psychology 12(1): 97–136, DOI: https://doi.org/10.1016/0010-0285(80)90005-5 50. Verghese, P. (2001). Visual search and attention: A signal detection theory approach. 
Neuron 31(4): 523–535, DOI: https://doi.org/10.1016/S0896-6273(01)00392-0 51. Vig, E., Dorr, M. and Cox, D. (2014). Large-scale optimization of hierarchical features for saliency prediction in natural images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. : 2798–2805, DOI: https://doi.org/10.1109/cvpr.2014.358 52. Vogel, E. K., Luck, S. J. and Shapiro, K. L. (1998). Electrophysiological evidence for a postperceptual locus of suppression during the attentional blink. Journal of Experimental Psychology: Human Perception and Performance 24(6): 1656.DOI: https://doi.org/10.1037/0096-1523.24.6.1656 53. Wolfe, J. M. (1994). Guided search 2.0 a revised model of visual search. Psychonomic bulletin & review 1(2): 202–238, DOI: https://doi.org/10.3758/BF03200774 54. Wolfe, J. M., Alvarez, G. A., Rosenholtz, R., Kuzmova, Y. I. and Sherman, A. M. (2011). Visual search for arbitrary objects in real scenes. Attention, Perception, & Psychophysics 73(6): 1650–1671, DOI: https://doi.org/10.3758/s13414-011-0153-3 55. Wolfe, J. M. and Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it?. Nature reviews neuroscience 5(6): 495–501, DOI: https://doi.org/10.1038/nrn1411 56. Zelinsky, G. J. (2008). A theory of eye movements during target acquisition. Psychological review 115(4): 787.DOI: https://doi.org/10.1037/a0013118 57. Zhang, J. and Sclaroff, S. (2013). Saliency detection: A boolean map approach. Proceedings of the IEEE International Conference on Computer Vision. : 153–160, DOI: https://doi.org/10.1109/iccv.2013.26
http://math.stackexchange.com/questions/142354/convex-pentagons-are-similar-if-conformally-equivalent?answertab=active
# Convex pentagons are similar if conformally equivalent. The problem: Suppose two convex pentagons $A$ and $B$ have equal interior angles (that is, writing $A=A_1A_2A_3A_4A_5$ and $B=B_1B_2B_3B_4B_5$, we have $\angle A_j =\angle B_j$ for each $j\in\{1,\ldots,5\}$). Suppose that $\mbox{int}(A) \approx \mbox{int}(B)$ are conformally equivalent with a biholomorphism $f:\mbox{int}(A) \rightarrow \mbox{int}(B)$ whose continuous extension to the boundary maps $A_i\overset{f}{\mapsto}B_i$. Show that under these conditions, $A$ and $B$ are similar. Ideas: I suspect the reflection principle is applicable, but I'm not certain how to work out the proof. - Both are conformally equivalent to the open unit disk, and are conformally equivalent to each other in any case. –  Will Jagy May 7 '12 at 21:51 Anyway, that's the Riemann Mapping Theorem for simply connected regions. The thing you are asked to prove is false. I am not entirely sure what would be a sensible question. Where did you get this? –  Will Jagy May 7 '12 at 22:34 I added a crucial hypothesis. The continuous extension to the boundary (which by correspondence of boundaries maps boundary to boundary) should map vertices to vertices. It was an optional exercise suggested after we did a similar problem on annuli in a complex analysis course. –  Marcel T. May 8 '12 at 5:26 Maybe use a Schwarz-Christoffel mapping to identify each pentagon with the upper half plane? Automorphisms of the upper half plane have a specific form. –  WimC May 8 '12 at 5:39 Let $f_1,f_2$ be conformal maps of the upper half-plane onto your polygons, and $f$ is your map between the polygons. Then $g:=f_1^{-1}\circ f\circ f_2$ is a conformal automorphism of the upper half-plane. Now consider the functions $f_2$ and $f_1\circ g$.
Both map the upper half-plane to polygons, and there are $5$ points $a_1,\ldots,a_5$ on the real line such that for each $k$, both $f_2$ and $f_1\circ g$ map $a_k$ to vertices of their respective polygons, and the interior angles at these two vertices are the same angle. Now both $f_2$ and $f_1\circ g$ must be represented by the Schwarz-Christoffel formula with the same singularities and same angles. But angles and singularities determine the Schwarz-Christoffel formula up to a composition with an affine map. Therefore $$f_2=Af_1\circ g+B$$ which proves the statement. Remarks. 5 is irrelevant. Convexity is also irrelevant. All we need is that the conformal map $f$ between the polygons sends vertices to vertices and the interior angles at the corresponding vertices are equal.
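For reference, here is the Schwarz-Christoffel formula invoked in the answer; this is the standard statement, added for the reader's convenience rather than quoted from the original thread:

```latex
% Schwarz-Christoffel map of the upper half-plane onto a polygon with
% interior angles \alpha_k \pi and prevertices a_1 < \cdots < a_n on the real axis
f(w) = A \int^{w} \prod_{k=1}^{n} (\zeta - a_k)^{\alpha_k - 1}\, d\zeta + B
```

Since $f_2$ and $f_1\circ g$ share the same prevertices $a_k$ and the same exponents $\alpha_k-1$, their integrands coincide, so the two maps can differ only in the affine constants $A$ and $B$; an affine map $z\mapsto Az+B$ with $A\neq 0$ is precisely a similarity of the plane.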
http://mathhelpforum.com/calculus/183381-line-integral-over-circle-how-do.html
Math Help - Line integral over a circle, how to do it? 1. Line integral over a circle, how to do it? Suppose I want to evaluate the moment of inertia of a circle (or ring or hoop). The moment of inertia is simply this integral $I=\int \rho^{2}dm$, where $\rho$ is the distance from the axis or pivot and $m$ is the mass. Let me do a simple integration of a ring when the axis is in the center of the circle. Then $dm=\frac{M}{2\pi R}ds$ $y=\sqrt{R^{2}-x^{2}}$ $y=-\sqrt{R^{2}-x^{2}}$ $I=\int \rho^{2}dm=\int (x^{2}+y^{2})dm=\int_{lower}R^{2}\frac{M}{2\pi R}ds+\int_{upper}R^{2}\frac{M}{2\pi R}ds=R^{2}\frac{M}{2\pi R}\pi R+R^{2}\frac{M}{2\pi R}\pi R=MR^{2}$ This was quite easy. Now suppose I put the axis at the bottom of the circle. Then $y=\sqrt{R^{2}-x^{2}}+R$ $y=-\sqrt{R^{2}-x^{2}}+R$ $I=\int \rho^{2}dm=\int (x^{2}+y^{2})dm=\int_{lower}(R^{2}-x^{2}+R^{2}-2R\sqrt{R^{2}-x^{2}}+x^{2})\frac{M}{2\pi R}ds+\int_{upper}(R^{2}-x^{2}+R^{2}+2R\sqrt{R^{2}-x^{2}}+x^{2})\frac{M}{2\pi R}ds=\frac{M}{\pi}\int_{lower}(R-\sqrt{R^{2}-x^{2}})ds+\frac{M}{\pi}\int_{upper}(R+\sqrt{R^{2}-x^{2}})ds$ Now how do I define $ds$ in a good way to calculate this integral? Is it as simple as letting $x$ run from $-R$ to $R$? No, it can't be! 2. Re: Line integral over a circle, how to do it? Ok, I solved the problem, so maybe I should post the rest in the calculus forum. Anyway this is what I did: $ds=\sqrt{1+(\frac{dy}{dx})^{2}}$ 3. Re: Line integral over a circle, how to do it? Originally Posted by fysikbengt Ok, I solved the problem, so maybe I should post the rest in the calculus forum. Anyway this is what I did: $ds=\sqrt{1+(\frac{dy}{dx})^{2}}$ I assume you mean $ds= \sqrt{1+ \left(\frac{dy}{dx}\right)^2}dx$ Another thing you could do is use polar coordinates. $x= R\cos(\theta)$, $y= R\sin(\theta)$. Then $ds= \sqrt{\left(\frac{dx}{d\theta}\right)^2+ \left(\frac{dy}{d\theta}\right)^2}\,d\theta= R\,d\theta$. 4. Re: Line integral over a circle, how to do it?
Originally Posted by HallsofIvy I assume you mean $ds= \sqrt{1+ \left(\frac{dy}{dx}\right)^2}dx$ Another thing you could do is use polar coordinates. $x= R\cos(\theta)$, $y= R\sin(\theta)$. Then $ds= \sqrt{\left(\frac{dx}{d\theta}\right)^2+ \left(\frac{dy}{d\theta}\right)^2}\,d\theta= R\,d\theta$. Yes, that sounds more clever. I had to change variables anyway. After a few hours I managed to solve it only to find there is a theorem (the parallel axis theorem) which made the whole calculation obsolete. But I don't regret anything.
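The parallel axis theorem mentioned at the end predicts $I = I_{cm} + MR^2 = MR^2 + MR^2 = 2MR^2$ for a hoop rotating about a point on its rim. The following Python sketch (my own check, not part of the original thread) approximates the line integral $I=\int\rho^{2}dm$ numerically and compares it with that prediction:

```python
import math

def hoop_inertia_about_rim(M, R, n=100000):
    """Approximate I = sum of rho^2 * dm over the hoop, with the rotation axis
    through the origin and the hoop centred at (0, R), so the axis lies on the rim."""
    dm = M / n                          # equal mass elements around the ring
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        x = R * math.cos(theta)
        y = R + R * math.sin(theta)     # centre shifted up by R
        total += (x * x + y * y) * dm   # rho^2 = x^2 + y^2
    return total

M, R = 3.0, 0.5
I_rim = hoop_inertia_about_rim(M, R)
print(I_rim, 2.0 * M * R * R)           # the two values agree closely
```

The Riemann sum over equally spaced angles is essentially exact here, because the integrand $2R^2(1+\sin\theta)$ is a trigonometric polynomial integrated over a full period.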
https://www.darwinproject.ac.uk/letter/?docId=letters/DCP-LETT-2579.xml&query=1862
# From J. D. Hooker   [12 December 1859]1 Kew Monday Dr. Darwin You have I know been drenched with letters since the publication of your book & I have hence forborne to add my mite— I hope now that you are well through Ed. II. & I have heard that you were flourishing in London. I have not yet got $\frac{1}{2}$ through the book, not from want of will, but of time—for it is the very hardest book to read to full profit that I ever tried—it is so cram full of matter & reasoning— I am all the more glad that you have published in this form—for the 3 vols—unprefaced by this would have choked any Naturalist of the 19th. Century & certainly have softened my brain in the operation of assimilating their contents.2 I am perfectly tired of marvelling at the wonderful amount of facts you have brought to bear & your skill in marshalling them & throwing them on the enemy—it is also extremely clear as far as I have gone, but very hard to fully appreciate. Somehow it reads very different from the mss. & I often fancy that I must have been very stupid not to have more fully followed it in mss.3 Lyell told me of his criticisms I did not appreciate them all, & there are many little matters I hope one day to talk over with you— I saw a highly flattering notice in the “English Churchman”—short & not at all entering into discussion but praising you & your book & talking patronizingly of the Doctrine!4 My mother who is still bedridden with inflammation in the cavity of the Tibia—has been reading it with much pleasure. Bentham & Henslow will still shake their heads I fancy, & Babington I hear does not think much of it!5 My Essay will be out this week I believe—the printers have delayed it disgracefully.6 I have no news that would interest you— We are all back in our house at Kew at last7 —& I am at my usual routine of Garden work half the day, Herbarium the other half, & the evenings for scientific work. 
I have again taken up the Arctic Flora, & am now interested in some curious points, particularly the absence on the whole Greenland coast, of a good many of the commonest plants of the W. side of Baffins bay, including species that are found all over Siberia, N. Europe Lapland, & Arctic N. America. I cannot comprehend this, for the struggle of Arctic plants is with the Elements much more than with one another & the Greenland climate ought to be peculiarly suitable to these absentees. That Greenland contains many species not found to the Westward of it is well known, & these being European forms is explicable. But why Ice bergs should not have carried certain common Arctic American plants to Greenland is an important consideration. The question has to be worked from several points of view & I will let you know the result.8 I was glad to meet Miss Darwin9 the other night & hear a fair account of you all Ever Yrs afft | Jos D Hooker P.S. I expect to think that I would rather be author of your book than of any other in Nat. Hist. Science. ## Footnotes Dated by the reference to CD ‘flourishing in London’. CD had broken his journey from Ilkley to Down in London, staying at Erasmus Alvey Darwin’s house from 7 to 9 December 1859 (‘Journal’; Appendix II). The Monday after CD’s visit was 12 December. CD still intended to publish his ‘big book’ on species. Hooker had read and commented on several chapters of CD’s species manuscript (see Correspondence vol. 6) as well as the equivalent material prepared for Origin (see letter to J. D. Hooker, 2 March [1859]). English Churchman, 1 December 1859, p. 1152. Hooker 1859. The Hookers were repainting their house, during which time Frances Harriet Hooker and the children stayed with her father, J. S. Henslow, in Hitcham (L. Huxley ed. 1918, 1: 428 n. 1). Hooker read a paper on Arctic plants at a meeting of the Linnean Society on 21 June 1860.
In it he applied CD’s theory of plant migrations during a former cold period to explain existing distribution patterns. The paper was published in 1862 (Hooker 1862). Henrietta Emma Darwin was staying in London at the home of her aunt and uncle, Frances Mackintosh and Hensleigh Wedgwood (Emma Darwin’s diary). ## Bibliography Correspondence: The correspondence of Charles Darwin. Edited by Frederick Burkhardt et al. 28 vols to date. Cambridge: Cambridge University Press. 1985–. Origin: On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. By Charles Darwin. London: John Murray. 1859. ## Summary JDH half through Origin. High praise for facts and reasoning. Lyell told JDH his criticisms: small matters JDH did not appreciate. Reactions of G. Bentham, J. S. Henslow, and C. C. Babington. ## Letter details Letter no. DCP-LETT-2579 From Joseph Dalton Hooker To Charles Robert Darwin Sent from Kew Source of text DAR 100: 137–8 Physical description ALS 4pp †
http://www.kerhuel.eu/w/index.php?title=Explorer_16_Board&oldid=1180
Matlab-Simulink device driver Blockset for Microchip dsPIC / PIC24 / PIC32 Microcontrollers. # Explorer 16 Board Acknowledgement: Thanks to Microchip® for providing the Explorer 16 Development Board with the REAL ICE programming system Explorer 16 Development Board equipped with dsPIC 33FJ256GP710 or PIC 24FJ128GA010 This example uses the Explorer 16 Development Board with the dsPIC 33FJ256GP710 or the PIC 24FJ128GA010 microcontroller. The model file is in the blockset install directory: examples\dsPIC_33f_Explorer16.mdl This model, with the dsPIC blockset, does the following: Simulink Model for Explorer 16 Board equipped with 33FJ256GP710 or PIC 24FJ128GA010 • One LED blinking at 1 Hz (Port A0) • One LED (Port A7): • blinking at 2 Hz when the left button is pushed (Port D6) • switched on or off depending on the position of the potentiometer (Port AN5) • Low-pass filtering of the sampled value from the potentiometer (at 1 Hz and 10 Hz) • Logging and plotting with Matlab, in real time, the raw and filtered samples from AN5 (potentiometer) # Set Up • When starting from scratch, you must configure the Simulink compiler: • Simulation ==> Configuration Parameters (Ctrl-E) • In the Real-Time Workshop panel, browse the System Target File and select 'dspic.tlc'. If you do not have 'dspic.tlc' in the list, the blockset is not properly installed. Download the latest demo version and install it. • Add the 'Master block' from the 'Embedded Target for Microchip dsPIC' library. • We set the model time step to 5 ms: • Simulation ==> Configuration Parameters (Ctrl-E) • In the Solver panel, set the fixed step size to 0.005 (s). ## PIC Master block configuration Master block configuration Open the Master block. • First, select the dsPIC used. Select the 33FJ256GP710 (selected by default; this chip is provided with the Explorer 16 board).
You can also select the PIC 24FJ128GA010, which is also provided with the Explorer 16 board and which has been tested. You could also use any other PIC. If the PIC target is modified, the model must be updated before compiling, or it must be compiled twice. • The dsPIC Timer 1 will be used to set the main time step of the model (defined previously as 5 ms in Simulink). In the "Real Time - Quartz" tab, we set the oscillator mode to Quartz (XT-HS) and we set the quartz to 8 MHz. The quartz of the Explorer 16 board is 16 MHz. We do not use the PLL here. If you want to use the PLL, check it and enter the MIPS you want in "Desired Instructions Per Seconds". All prescalers and the PLL multiplier will be set automatically to achieve this desired MIPS. However, verify the MIPS actually achieved in "Number Instructions Per Seconds". • All timers will be configured automatically (-1) # Set up PIC peripherals ### Digital Input (Push Button) and Output (Blinking LED) Digital Input Configuration Digital Output Configuration • The Digital Input block's output value is 0 while the button is pushed. The diagram makes the LED blink when the button is pushed. • The Digital Output Write block switches the LEDs connected to pin A0 or A7 of the dsPIC on or off depending on the block's input values. The refresh rate of this block is 200 Hz (5 ms: red). ### Analog to Digital Converter (ADC for potentiometer) The potentiometer is connected to the AN5 pin. We use here the 10-bit ADC of the dsPIC, but using the 12-bit ADC is possible, and switching from one to the other is as simple as selecting which one you want to use (a one-click solution!). The converted value is an unsigned 16-bit variable where only the lower 10 bits are used. The block creates the variable 'ANmax' in the workspace, which holds the maximum value taken by the output of the block. If using the 10-bit ADC, the max value is 2^10 = 1024. When using the 12-bit ADC, the max value is 2^12 = 4096.
The output value of the ADC block is compared to ANmax/2, which is the mid position of the potentiometer (corresponding to a voltage of 1.65 V). When the converted value is greater than ANmax/2, the LED connected to port A7 is switched on; otherwise it is switched off. ### UART UART Peripheral Configuration Data transmission is done using the dsPIC UART peripheral and one PC serial COM port. The demo board's RS-232 serial port must be connected to one COM port of the PC. First of all, the baud rate of the UART must be set. For each baud rate selected, the Info text box shows the real baud rate obtained and the % error from the selected baud rate. The % error must be low (absolute value below 3%) for the RS-232 serial transmission to work properly. Using an 8 MHz quartz with no PLL (obtaining 4 MIPS), the fastest workable baud rate is 19200. The UART port of the board, with the appropriate MAX3232 component, is connected to UART 2 of the dsPIC. # Functions A counter is used to make the LEDs blink. Its sample time, set to 1 for 1 second, makes it count at 1 Hz. The model has blocks with a sample time of 5 ms (the model's main sample time) and others with 1 second (which is a multiple (×200) of the model sample time). Thus, this is a multirate model. Colors are used to see the different sample times (option in Format ==> Ports/Signal display ==> 'Sample Time Color'). • Red blocks have a sampling rate of 5 ms (200 Hz) • Green blocks have a sampling rate of 1 s (1 Hz) The counter is configured to count from 0 to 3. Bitwise logic blocks extract bit 0 and bit 1 of the counter. These bits are sent to the Digital Output Write block. Note: we could use two separate blocks: one for port A0 with a refresh rate of 1 Hz and another one for the third LED (connected to A7) with a refresh rate of 200 Hz. ### Filtering The output value of the ADC block is also filtered with two different filters.
One first-order filter with a frequency cutoff at 1 Hz, designed with the Matlab command line >>c2d(tf([6],[1 6]),.005,'tustin');  % (needs the Control System Toolbox) and one first-order filter with a frequency cutoff at 10 Hz, designed with the Matlab command line >>c2d(tf([60],[1 60]),.005,'tustin') The Matlab filter block used here works with the double data type: it needs doubles at its input and provides doubles at its output. Two conversion blocks are used. Note however that calculations using doubles are time-consuming for the dsPIC, and it is preferable to implement a fixed-point filter. The Fixed-Point Toolbox is very helpful for making precise and efficient calculations using only fixed-point variables. # Compile the Simulink model to obtain the .hex / .elf file Once the model is done, C code is generated and compiled: click on the 'Incremental build' button or press 'Ctrl+b'. The .hex file, obtained in the same directory as the .mdl file, can be loaded into the dsPIC using MPLAB: in MPLAB, go to File ==> Import, select the .hex file, then download it into your dsPIC using either ICD2, REAL ICE or others... An HTML report with the generated C code can also be produced: go to Simulation ==> Configuration Parameters ==> select Real Time Workshop in the left column and check the option Generate HTML report in the right panel. The model was compiled with Matlab 2007. dsPIC 33FJ256GP710 24FJ128GA010 [Report] [Report] [Hex file] [Hex file] # Log and Plot Data ### PIC side Send multiplexed data to Matlab (or Labview) Once the UART is configured, the block to send data from the dsPIC to Matlab is configured. We set 3 channels to send 3 different values. At the low-level protocol, the three values are uint16 and need 3 chars each to be sent. All data will be logged since at each time step 9.6 chars can be sent and 9 are required (3×3). Data will be logged at the same sampling rate as the block: 200 Hz.
It is possible to use more channels, but this will lead to data loss (not important for visualisation or debugging). Once the model is generated and compiled and the .hex file is loaded into the dsPIC, data will be logged and viewed with the Matlab user interface created. It can be opened either by double-clicking the block 'Interface Tx-Matlab' or by typing at the Matlab prompt >> rs232gui ### Computer side Raw and filtered data (at 1 Hz and 10 Hz) from the ADC channel AN5 connected to the potentiometer Once the .hex file is loaded into the dsPIC, you get two LEDs blinking at 1 Hz and 0.5 Hz, and the Explorer 16 demo board is connected to one PC COM port. RS232gui user interface to log and plot data Open the rs232gui user interface (double-click the block 'Interface Tx-Matlab' or type rs232gui at the Matlab prompt). Set the COM port to the one you are using (PC side) and set the baud rate to 19200 (same as the dsPIC). No flow control is used. Then click on Connexion, wait 3 seconds and click on the Start button. On some PCs, Matlab closes if you click on Connexion while data are being sent from the dsPIC to the PC. In that case, disconnect the dsPIC before pushing the Connexion button. Moving the potentiometer, you will obtain the following graphic with three curves: the blue curve is the raw data (unfiltered), the red curve is the data filtered at 1 Hz and the green curve is the data filtered at 10 Hz. Animation showing raw and filtered data (at 1 Hz and 10 Hz) from the ADC channel AN5 connected to the potentiometer
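As a cross-check of the c2d(...,'tustin') commands used in the Filtering section, here is a Python sketch of the same bilinear (Tustin) discretization, worked out by hand for a first-order low-pass $H(s)=\omega_c/(s+\omega_c)$. The function and variable names here are my own illustration, not part of the original page or blockset:

```python
def tustin_first_order(wc, T):
    """Bilinear (Tustin) discretization of H(s) = wc / (s + wc):
    substitute s -> (2/T) * (z - 1) / (z + 1) and collect terms."""
    k = 2.0 / T
    a0 = k + wc                      # leading denominator coefficient
    b = [wc / a0, wc / a0]           # numerator of H(z): wc * (1 + z^-1) / a0
    a = [1.0, (wc - k) / a0]         # denominator of H(z): 1 + a1 * z^-1
    return b, a

def filter_step(b, a, x, state):
    """One step of the difference equation y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    x_prev, y_prev = state
    y = b[0] * x + b[1] * x_prev - a[1] * y_prev
    return y, (x, y)

# ~1 Hz cutoff (wc = 6 rad/s) at the model's 5 ms step, matching the page's first c2d call
b, a = tustin_first_order(6.0, 0.005)
# Sanity check: the DC gain (b0 + b1) / (1 + a1) of a low-pass should be 1,
# i.e. a constant input passes through unchanged once the filter settles.
print(sum(b) / sum(a))
```

With $\omega_c = 6$ rad/s and $T = 5$ ms this reproduces the coefficients of the page's 1 Hz filter; replacing $\omega_c$ with 60 gives the 10 Hz one.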
https://digitalblackboard.io/physics/high-school/mechanics/kinematics/quantities/
## Introduction • We begin our study of kinematics by defining the kinematic quantities considering straight line motion in one dimension, referred to as rectilinear motion. This means that the particle that moves is constrained to move along a straight line. ## Position and Displacement • In order to describe the motion of an object, you must first be able to describe its position, i.e. where it is at any particular time. More precisely, you need to specify its position relative to a convenient reference frame. Earth is often used as a reference frame, and we often describe the position of an object as it relates to stationary objects in that reference frame. • In rectilinear motion, we can simply use a number line (reference frame) to track the motion of an object. In the following example, we say that the position of the object is $x=1$, which corresponds to its coordinate on the number line. • Displacement is the change in position of an object from its initial position to its final position. It is a vector quantity that describes the shortest distance between the initial position ($x_0$) and final position ($x_f$) of an object. It is denoted by the symbol $\Delta x$. Displacement Displacement is the change in position of an object given as: $\boxed{\Delta{x} = x_f - x_0 }$ where $x_0$ is the initial position and $x_f$ is the final position. • Note that the SI unit for displacement is the meter (m). • Displacement has a direction as well as a magnitude. The direction is always pointing from the initial position to the final position. • In rectilinear motion, it suffices to know the sign of the displacement $\Delta x$ in order to know the direction in which the object is moving. A positive $\Delta x$ means that the object has moved in the direction of increasing $x$. A negative $\Delta x$ means that the object has moved in the direction of decreasing $x$. EXAMPLE : Computing displacements. A particle is at $x=1$m when the clock starts. 
At $t=1$s, it is recorded to be at $x=6$m and at $t=2$s, it is recorded to be at $x=-2$m. Find (a) the displacement of the particle from $t=0$s to $t=1$s. (b) the displacement of the particle from $t=0$s to $t=2$s. (c) the displacement of the particle from $t=1$s to $t=2$s. Solution (click to expand) ## Time • Every measurement of time involves measuring a change in some physical quantity. For example, we may be interested in how the position of the Sun in the sky changes with time. • Motion occurs when there is a change in the position of an object with respect to time. • An important quantity in Kinematics is called the elapsed time $\Delta t$ during which motion occurs. It is the difference between the ending time ($t_f$) and starting time ($t_0$). $$\boxed{\Delta t = t_f - t_0}$$ • To simplify calculations, we typically start our clock only when motion starts so that $t_0 = 0$ and let $t_f = t$. In this case, $\Delta t = t-0=t$. ## Distance and Average Speed • Distance or distance travelled is a scalar quantity that describes the magnitude of the path traveled by an object without considering its direction. • Average speed is defined as the distance traveled divided by elapsed time. $$\boxed{\text{Average Speed} = \frac{\text{Distance Travelled}}{\Delta t} }$$ EXAMPLE : Calculating distance and average speed. A particle is at $x=1$m when the clock starts. At $t=1$s, it is recorded to be at $x=6$m and at $t=2$s, it is recorded to be at $x=-2$m. Find (a) the distance travelled and displacement of the particle from $t=0$s to $t=2$s. (b) the average speed of the particle from $t=0$s to $t=2$s. Solution (click to expand) ## Average Velocity Average Velocity The average velocity during an interval of time $\Delta t$ is the ratio of the change in position $\Delta x$ (displacement) to the elapsed time $\Delta t$.
$\boxed{\overline{v} = \frac{\Delta x}{\Delta t}}$

• Note that the above definition indicates that velocity is a vector because displacement $\Delta x$ is a vector. Therefore, it has both a magnitude and a direction. In rectilinear motion, the direction is indicated by the sign. A positive sign means the displacement is in the positive direction, and vice versa.

• The SI unit for velocity is meters per second or m/s.

• The average velocity $\overline{v}$ over a time interval $\Delta t$ can be computed as the slope of the secant line joining the two points on the position-time ($x$-$t$) graph corresponding to $\Delta t$.

• The average velocity of an object does not tell us anything about its velocity between the starting and ending points.

EXAMPLE : Calculating average speed and average velocity.

A particle is at $x=1$m when the clock starts. At $t=1$s, it is recorded to be at $x=6$m and at $t=2$s, it is recorded to be at $x=-2$m. Find

(a) the average speed of the particle from $t=0$s to $t=2$s.
(b) the average velocity of the particle from $t=0$s to $t=2$s.

Solution (click to expand)

• The above example demonstrates that while the average speed tells us (on average) how fast the object was moving during the elapsed time ($\Delta t$), the average velocity may not always be meaningful over a finite time interval $\Delta t$.

• Nevertheless, the average velocity is used to define a more important quantity called the instantaneous velocity.

## Instantaneous Velocity and Speed

• In the above definition of the average velocity $\overline{v}$, imagine we zoom into a small segment of the motion and compute $\overline{v}$. According to the definition, this will simply give us the average velocity over that small $\Delta t$.

• Now, imagine we make the elapsed time $\Delta t$ so small that we are left with an infinitesimally small time interval. Over such an interval, the average velocity becomes the instantaneous velocity or the velocity at an instant.
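Before moving on, the recurring worked example above (positions 1 m, 6 m, −2 m recorded at t = 0, 1, 2 s) can be checked with a short script. This is an added sketch, not part of the original notes:

```python
# Recorded positions (m) at t = 0, 1, 2 s, from the worked examples above.
positions = {0: 1.0, 1: 6.0, 2: -2.0}

# Displacement: signed change in position, Δx = x_f - x_0.
displacement = positions[2] - positions[0]   # -2 - 1 = -3 m

# Distance travelled: total path length, |1→6| + |6→-2| = 5 + 8 = 13 m.
distance = abs(positions[1] - positions[0]) + abs(positions[2] - positions[1])

elapsed = 2.0                                # Δt = t_f - t_0 = 2 s

avg_speed = distance / elapsed               # 13 / 2 = 6.5 m/s
avg_velocity = displacement / elapsed        # -3 / 2 = -1.5 m/s
```

Note that the average velocity is negative (the particle ends up on the negative side of its start), even though the average speed is large — exactly the distinction the notes draw between the two quantities.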
• Instantaneous velocity $v$ is the average velocity at a specific instant in time (or over an infinitesimally small time interval).

Instantaneous Velocity

The instantaneous velocity is a vector quantity that gives the speed and direction of an object at a specific instant $t$. Mathematically, it is the limit of the average velocity $\overline{v}$ as the elapsed time $\Delta t$ approaches zero:

$\boxed{v(t) = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t} = \frac{dx}{dt} }$

We say that $v(t)$ is the rate of change of position $x$ with respect to time $t$.

• The instantaneous velocity is typically referred to simply as the velocity.

• Note that the velocity is a function of time, as is evident from the notation $v(t)$.

• From Calculus, the velocity $\displaystyle v(t)=\frac{dx}{dt}$ is simply the slope (gradient of the tangent) of the $x$-$t$ graph at the instant $t$.

• In the above $x$-$t$ graph, we consider the three time intervals:

$$\Delta t_1 = t_6 - t_1, \quad \Delta t_2 = t_5 - t_2, \quad \Delta t_3 = t_4 - t_3$$

• Note that $\Delta t_3 < \Delta t_2 < \Delta t_1$. These time intervals correspond to the following displacements:

$$\Delta x_1 = x_6 - x_1, \quad \Delta x_2 = x_5 - x_2, \quad \Delta x_3 = x_4 - x_3$$

• The average velocities corresponding to the three elapsed time intervals are computed as the slopes of the secant lines joining the starting and ending points on the $x$-$t$ graph:

$$\textcolor{red}{\overline{v}_1 = \frac{\Delta x_1}{\Delta t_1}}, \quad \textcolor{teal}{\overline{v}_2 = \frac{\Delta x_2}{\Delta t_2}}, \quad \textcolor{BlueViolet}{\overline{v}_3 = \frac{\Delta x_3}{\Delta t_3}}$$

• The instantaneous velocity $v(t_0)$ is the slope of the tangent line at $t=t_0$. According to the definition, as $\Delta t \to 0$, the average velocity approaches the instantaneous velocity $v(t_0)$ at $t=t_0$.
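This limiting behaviour can be watched numerically. In the sketch below, the position function $x(t) = t^2$ is an assumed example (it does not appear in the notes); the secant slopes approach the tangent slope of 2 m/s at $t_0 = 1$ s:

```python
# Assumed position function for the demonstration: x(t) = t^2, so dx/dt = 2t.
def x(t):
    return t * t

t0 = 1.0
true_v = 2 * t0  # analytic instantaneous velocity at t0

for dt in (1.0, 0.1, 0.01, 0.001):
    # Average velocity over [t0, t0 + dt] = slope of the secant line.
    secant_slope = (x(t0 + dt) - x(t0)) / dt
    print(dt, secant_slope)
# secant slopes ≈ 3.0, 2.1, 2.01, 2.001 — approaching the tangent slope 2.0
```

Shrinking $\Delta t$ by a factor of ten moves the secant slope ten times closer to the tangent slope, which is the content of the limit definition above.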
• The above figure demonstrates that the slope of the secant line (average velocity) approaches the slope of the tangent line (instantaneous velocity) at $t=t_0$ as $\Delta t \to 0$.

EXAMPLE : Calculating the $v$-$t$ graph from the $x$-$t$ graph.

Given the $x$-$t$ graph of an object below, find the corresponding velocity-time ($v$-$t$) graph.

Solution (click to expand)

During the time interval between $0$ s and $0.5$ s, the object is moving away from the origin and the position-versus-time curve has a positive slope. At any point along the curve during this time interval, we can find the instantaneous velocity by taking its slope, which is $+1$ m/s. In the subsequent time interval, between $0.5$ s and $1.0$ s, the position doesn't change and we see the slope is zero. From $1.0$ s to $2.0$ s, the object is moving back toward the origin and the slope is $−0.5$ m/s. The object has reversed direction and has a negative velocity.

• Instantaneous speed ($s$) is just the magnitude of the instantaneous velocity.

$$\boxed{s= |v|}$$

• For example, suppose a runner at one instant has an instantaneous velocity of $−3.0$ m/s. If we define the negative direction to be to the left, then we know the person is running towards the left with a speed of $3.0$ m/s. The velocity contains information on direction, but the speed is just a magnitude.

• Once again, we typically refer to instantaneous speed simply as speed in subsequent sections.

• In the above figure, the $x$-$t$ graph, $v$-$t$ graph and $s$-$t$ graph of an object are shown.

• We observe that the $v$-$t$ graph is obtained from the $x$-$t$ graph simply by computing its slope over the two segments where the slope is constant, i.e. $0$s to $0.25$s and $0.25$s to $0.5$s.

• The $s$-$t$ graph is obtained from the $v$-$t$ graph by taking the absolute value of the $v$-$t$ graph.

## Average Acceleration

• In everyday language, to accelerate means to speed up.
The greater the acceleration, the greater the change in velocity over a given time.

Average Acceleration

The average acceleration during an interval of time $\Delta t$ is the ratio of the change in velocity $\Delta v$ to the elapsed time $\Delta t$.

$\boxed{\overline{a} = \frac{\Delta v}{\Delta t}}$

• Since acceleration is velocity in m/s divided by time in s, the SI unit for acceleration is m/s$^2$ (meters per second squared).

• Note that acceleration arises from a change in velocity ($\Delta v$), which is a vector quantity. Such a change can be due to either a change in direction or a change in magnitude of $v$. For example, if a car makes a turn at constant speed, it still has an acceleration since its direction changes.

• The average acceleration $\overline{a}$ over a time interval $\Delta t$ can be computed as the slope of the secant line joining the two points on the $v$-$t$ graph corresponding to $\Delta t$.

EXAMPLE : Calculating average acceleration.

A car accelerates from rest to a velocity of $15.0~$m/s due West in $1.80~$s. What is its average acceleration?

Solution (click to expand)

## Negative Acceleration vs Deceleration

• Deceleration is defined as the process of slowing down (speed decreasing).

• What can you deduce from a negative average acceleration? Is the object slowing down (decelerating)? Not quite.

• Negative acceleration $\ne$ deceleration. A negative average acceleration (from $\overline{a} = \frac{\Delta v}{\Delta t}$) means that the change in velocity is negative. This can correspond to one of two scenarios from the table below:

• the object is travelling in the positive direction and slowing down.
• the object is travelling in the negative direction and speeding up.

Table - Sign of acceleration and velocity in different scenarios.
Scenario | Velocity $v$ | Speed $|v|$ | Acceleration $a$ | What It Means
---------|--------------|-------------|------------------|---------------
(a) | ➕ | Increasing | ➕ | Speeding Up in ➕ direction
(b) | ➕ | Decreasing | ➖ | Slowing Down in ➕ direction
(c) | ➖ | Decreasing | ➕ | Slowing Down in ➖ direction
(d) | ➖ | Increasing | ➖ | Speeding Up in ➖ direction

• We realize that if the signs of the acceleration and the velocity are the same, then the object is speeding up.

• Conversely, if the signs of the acceleration and the velocity are different, then the object is slowing down.

## Instantaneous Acceleration

• In the above definition of the average acceleration $\overline{a}$, imagine we zoom into a small segment of the motion and compute $\overline{a}$. According to the definition, this will simply give us the average acceleration over that small $\Delta t$.

• Now, imagine we make the elapsed time $\Delta t$ so small that we are left with an infinitesimally small time interval. Over such an interval, the average acceleration becomes the instantaneous acceleration or the acceleration at an instant.

• Instantaneous acceleration $a$ is the average acceleration at a specific instant in time (or over an infinitesimally small time interval).

Instantaneous Acceleration

The instantaneous acceleration is the limit of the average acceleration $\overline{a}$ as the elapsed time $\Delta t$ approaches zero:

$\boxed{a(t) = \lim_{\Delta t \to 0} \frac{\Delta v}{\Delta t} = \frac{dv}{dt} }$

We say that $a(t)$ is the rate of change of velocity $v(t)$ with respect to time $t$.

• The instantaneous acceleration is typically referred to simply as the acceleration.

• From Calculus, the acceleration $a(t)= \frac{dv}{dt}$ is simply the slope (gradient of the tangent) of the $v$-$t$ graph at the instant $t$.
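The sign rules for speeding up versus slowing down reduce to one test: in rectilinear motion the object speeds up exactly when its velocity and acceleration have the same sign. A minimal added check (the function name is ours, not part of the notes):

```python
# Speeding up <=> velocity and acceleration have the same sign (1-D motion).
def speeding_up(v, a):
    return v * a > 0

# The four scenarios from the table above:
assert speeding_up(+3.0, +1.0)        # (a) + velocity, + acceleration
assert not speeding_up(+3.0, -1.0)    # (b) + velocity, - acceleration
assert not speeding_up(-3.0, +1.0)    # (c) - velocity, + acceleration
assert speeding_up(-3.0, -1.0)        # (d) - velocity, - acceleration
```

This makes explicit why a negative acceleration alone says nothing about deceleration: scenario (d) has negative acceleration yet the object speeds up.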
https://tools.carboncollective.co/present-value/31-in-87-years/
# Present Value of $31 in 87 Years

When you have a single payment that will be made to you, in this case $31, and you know that it will be paid in a certain number of years, in this case 87 years, you can use the present value formula to calculate what that $31 is worth today.

Below is the present value formula we'll use to calculate the present value of $31 in 87 years.

$$Present\: Value = \dfrac{FV}{(1 + r)^{n}}$$

We already have two of the three required variables to calculate this:

• Future Value (FV): This is the $31
• n: This is the number of periods, which is 87 years

So what we need to know now is r, which is the discount rate (or rate of return) to apply. It's worth noting that there is no correct discount rate to use here. It's a very personal number that can vary depending on the risk of your investments.

For example, if you invest in the market and you earn on average 8% per year, you can use that number for the discount rate. You can also use a lower discount rate, based on the US Treasury ten year rate, or some average of the two.

The table below shows the present value (PV) of $31 paid in 87 years for interest rates from 2% to 30%. As you will see, the present value of $31 paid in 87 years can range from $0.00 to $5.54.

Discount Rate | Future Value | Present Value
--------------|--------------|---------------
2% | $31 | $5.54
3% | $31 | $2.37
4% | $31 | $1.02
5% | $31 | $0.44
6% | $31 | $0.19
7% | $31 | $0.09
8% | $31 | $0.04
9% | $31 | $0.02
10% | $31 | $0.01
11% to 30% | $31 | $0.00

As mentioned above, the discount rate is highly subjective and will have a big impact on the actual present value of $31. A 2% discount rate gives a present value of $5.54 while a 30% discount rate would mean a $0.00 present value.
The rate you choose should be somewhat equivalent to the expected rate of return you'd get if you invested $31 over the next 87 years. Since this is hard to calculate, especially over longer periods of time, it is often useful to look at a range of present values (from a 5% discount rate to a 10% discount rate, for example) when making decisions.

Hopefully this article has helped you to understand how to make present value calculations yourself. You can also use our quick present value calculator for specific numbers.
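The table values follow directly from the formula above; here is a small added sketch (not from the original article) reproducing a few rows:

```python
# Present value of a single future payment: PV = FV / (1 + r)^n.
def present_value(fv, r, n):
    return fv / (1 + r) ** n

fv, years = 31.0, 87

for rate in (0.02, 0.03, 0.08):
    print(f"{rate:.0%}: {present_value(fv, rate, years):.2f}")
# 2% gives about 5.54, 3% about 2.37, 8% about 0.04 — matching the table
```

Note how sensitive the result is to the rate over 87 years: compounding makes the divisor $(1+r)^{87}$ explode, which is why every rate above 10% rounds to $0.00.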
https://isr-publications.com/jnsa/articles-1485-remarks-on-remotal-sets-in-vetor-valued-function-spaces
# REMARKS ON REMOTAL SETS IN VECTOR VALUED FUNCTION SPACES

Volume 2, Issue 1, pp 1-10

### Authors

M. SABABHEH - Department of Science and Humanities, Princess Sumaya University For Technology, Al Jubaiha, Amman 11941, Jordan.
R. KHALIL - Department of Mathematics, Jordan University, Al Jubaiha, Amman 11942, Jordan.

### Abstract

Let $X$ be a Banach space and $E$ be a closed bounded subset of $X$. For $x \in X$ we set $D(x,E) = \sup\{\| x − e \|: e \in E\}$. The set $E$ is called remotal in $X$ if for any $x \in X$, there exists $e \in E$ such that $D(x,E) = \| x − e \|$. It is the object of this paper to give new results on remotal sets in $L^p(I,X)$, and to simplify the proofs of some results in [5].

### Keywords

• Remotal sets
• Approximation theory in Banach spaces

• 46B20
• 41A50
• 41A65

### References

• [1] E. Asplund, Farthest points in reflexive locally uniformly rotund Banach spaces, Israel J. Math., 4 (1966), 213-216.
• [2] M. Baronti, P. Papini, Remotal sets revisited, Taiwanese J. Math., 5 (2001), 357-373.
• [3] A. Boszany, A remark on uniquely remotal sets in C(K,X), Period. Math. Hungar., 12 (1981), 11-14.
• [4] E. Cheney, W. Light, Lecture notes in Mathematics, Springer-Verlag Berlin Heidelberg, (1985)
• [5] R. Khalil, Sh. Al-Sharif, Remotal sets in vector valued function spaces, Scientiae Mathematicae Japonicae, 63, No. 3 (2006), 433-441.
https://mathematics.huji.ac.il/event/nt-ag-seminar-shuddhodan-k-v-huji-self-maps-varieties-over-finite-fields?ref_tid=3830
# NT & AG Seminar: Shuddhodan K V (HUJI) "Self maps of varieties over finite fields"

Date: Mon, 03/06/2019, 14:30-15:30
Location: Ross building 70

Title: Self maps of varieties over finite fields

Abstract: Esnault and Srinivas proved that, as in Betti cohomology over the complex numbers, the value of the entropy of an automorphism of a smooth proper surface over a finite field $\mathbb{F}_q$ is taken in the subspace spanned by algebraic cycles inside $\ell$-adic cohomology. In this talk we will discuss some analogous questions in higher dimensions motivated by their results and techniques.
http://mathoverflow.net/questions/126309/the-etale-cohomologyring-structure-of-torsion-sheaves-on-varieties
# The etale cohomology "ring" structure of torsion sheaves on varieties

For a topological manifold $M$, one can speak of the cohomology ring structure $H^*(M, k)$ where $k$ is a ring. If one replaces $M$ by an arithmetic scheme $X$ over a base ring $S$, and replaces $k$ by a torsion sheaf $\mu_n$, then one can define the "cup product" $H^i(X, \mu_n)\times H^j(X, \mu_n)\to H^{i+j}(X, \mu_n^2)$.

However, if $S$ contains the $n$-th roots of unity, then one can "untwist" the torsion coefficients by the canonical isomorphism $\mathbb{Z}/n\cong \mu_n^r$. (E.g., one can view the isomorphism of etale sheaves $\mu_n\cong \mathbb{Z}/n$ as induced by the isomorphism of the corresponding group schemes that represent them.) Then one indeed has a cohomological "ring" structure for $H^*(X, \mathbb{Z}/n)$.

My question is, people don't seem to be using this information a lot to talk about properties of arithmetic schemes. Maybe I am wrong. To my knowledge, even when $X$ is an elliptic curve over the local field $\mathbb{Q}_p$, with coefficients $\mathbb{Z}/2$, not much is obvious to me. In particular, does the congruence condition on $p$ make a difference? (My real question is, how does "arithmetic" interplay with "geometry" in this sense?)

I computed this "ring" structure for elliptic curves, but I am also worried that this may be trivial in the eye of the experts. I am not sure if MO is a good place to post this, anyway.
https://physics.stackexchange.com/questions/553295/is-the-hamiltonian-fully-defined-by-a-quantum-state-vector
# Is the Hamiltonian fully defined by a quantum state (vector)? [duplicate] From what I have read, the evolution of a quantum state is determined by the Hamiltonian (Schrodinger equation). However, I'm trying to understand if the Hamiltonian itself can be fully derived from the quantum state, or if it needs to be defined externally. From my understanding, the Hamiltonian includes information about the potential energy (when particles are interacting, etc), and that the laws of physics are actually "embedded" in the Hamiltonian, and not in the actual state vector. Is this correct, or does the state vector itself contain information about all the laws of physics? I hope my question is clear... Thanks! No. The state is determined by the preparation procedure, which is quite distinct and independent from the Hamiltonian. As additional reading on this I recommend this excellent article by a master in foundational issues: Peres, Asher. "What is a state vector?." American Journal of Physics 52.7 (1984): 644-650. The abstract is by itself enlightening: “ A state vector is not a property of a physical system (nor of an ensemble of systems). It does not evolve continuously between measurements, nor suddenly ‘‘collapse’’ into a new state vector whenever a measurement is performed. Rather, a state vector represents a procedure for preparing or testing one or more physical systems. No ‘‘quantum paradoxes’’ ever appear in this interpretation. The formulation of dynamical laws may involve path integrals and/or S‐matrix theory.” Edit: I understand the “state” question as meaning the initial state $$\vert\Psi(0)\rangle$$. • However, could one determine (at least something about) the Hamiltonian if we could perform many experiments where we could track the change in the state over time? Certainly this doesn't determine the Hamiltonian, but could this be used to "find it"? 
This is what I thought when I read in the question "I'm trying to understand if the Hamiltonian itself can be fully derived from the quantum state..." (I didn't down vote). – BioPhysicist May 19 '20 at 20:15 • @BioPhysicist I think the OP is asking if the Hamiltonian can somehow be read directly from the initial state. If you want to determine the Hamiltonian doing some experiments you can do it probably much more simply if you know how to measure energy (as long as you can find a tomographically complete set of observables). – Dvij D.C. May 19 '20 at 20:46 • @BioPhysicist If you only have one state to work with, the answer is no. To see why, suppose that the state you happen to have is a stationary state of the Hamiltonian. The only thing you can find out about the Hamiltonian in this case is that the state you have is one of its eigenstates; you learn nothing else about the structure. – probably_someone May 19 '20 at 20:50 • @BioPhysicist Yes, I agree. It's a bit unclear what OP means by "quantum state". – Dvij D.C. May 19 '20 at 20:53 • @hyportnex so I guess it's unclear as to "which state" we are talking about. – ZeroTheHero May 19 '20 at 22:55
https://www.physicsforums.com/threads/conservation-of-momentum-elastic-collision.253797/
# Homework Help: Conservation of Momentum (Elastic Collision)

1. Sep 4, 2008

### tanzl

1. The problem statement, all variables and given/known data

A particle with mass m and speed v collides with another particle with mass m, which is initially stationary. The collision is elastic. Show that the particles travel in perpendicular directions after the collision.

2. Relevant equations

1) initial momentum = final momentum
2) initial kinetic energy = final kinetic energy

3. The attempt at a solution

Assume that B is the moving particle and A is the stationary particle.

By conservation of momentum,

For the horizontal component of the motion: m vi = m vf cos$\theta$ + m uf cos$\phi$
For the vertical component of the motion: 0 = m vf sin$\theta$ + m uf sin$\phi$

where
vi = initial velocity of B
vf = final velocity of B
uf = final velocity of A
$\theta$ = angle between B's final velocity and the horizontal axis
$\phi$ = angle between A's final velocity and the horizontal axis

By conservation of energy,

$\frac{1}{2}$m vi$^2$ = $\frac{1}{2}$m(vf$^2$ + uf$^2$)

So, vi$^2$ = vf$^2$ + uf$^2$

These are all the equations (3 equations, 5 variables) I can get. But it seems that I have too many variables, so I can't solve for $\theta$ and $\phi$. Is there any more assumption I need to make in order to solve the question?

2. Sep 4, 2008

### LowlyPion

Looks like a good start. You look armed and dangerous.

3. Sep 5, 2008

### Clairefucious

In these sorts of problems I find it useful to draw a diagram of the vectors. You know that the sum of all the x- and y-velocity vectors before and after the collision are equal, which gives you the following options:

y-velocity: uf*sin(phi) = - vf*sin(theta)   Equation (1) [equal and opposite final velocities of the two particles in the y-direction]

a) phi = 0, theta = 180 (in degrees)
b) phi = 180, theta = 0
c) 0 <= phi <= 90, -90 <= theta <= 0
d) 0 <= theta <= 90, -90 <= phi <= 0

From conservation of kinetic energy, you can eliminate options a) and b) for these particles of mass 'm'.
You can then choose either c) or d) without loss of generality. You then need to sub Equation (1) into the equations you have for conservation of x-momentum and conservation of kinetic energy. It will require some funky algebra to solve the equation, but the trick is to place certain boundaries on what the final angles can be.

Hope this helps

4. Sep 6, 2008
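[Editorial aside — not part of the thread.] The result being proved can also be checked numerically. The sketch below assumes the standard equal-mass elastic-collision outcome: the struck particle A leaves along the contact direction n̂, and the incident particle B keeps the component of its velocity perpendicular to n̂. The contact angles tried are arbitrary:

```python
import math

# Incoming velocity of B (m/s); A is initially at rest, masses are equal.
v = (3.0, 0.0)

for angle in (10, 30, 45, 60, 80):      # contact-line directions to try (degrees)
    n = (math.cos(math.radians(angle)), math.sin(math.radians(angle)))
    p = v[0] * n[0] + v[1] * n[1]       # component of v along the contact line
    u_f = (p * n[0], p * n[1])          # A leaves along the contact line
    v_f = (v[0] - u_f[0], v[1] - u_f[1])  # B keeps the perpendicular part
    dot = v_f[0] * u_f[0] + v_f[1] * u_f[1]
    assert abs(dot) < 1e-12             # dot product zero => perpendicular
```

Algebraically this is the same argument the thread is circling: squaring the vector momentum equation $\vec{v}_i = \vec{v}_f + \vec{u}_f$ gives $v_i^2 = v_f^2 + u_f^2 + 2\,\vec{v}_f\cdot\vec{u}_f$, and comparing with the energy equation $v_i^2 = v_f^2 + u_f^2$ forces $\vec{v}_f\cdot\vec{u}_f = 0$.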
http://physics.tutorcircle.com/motion/uniform-motion.html
# Uniform Motion

Motion plays a key role in physics. It tells about the changing position of a moving body over a given time. There are basically two types of motion:

• Uniform Motion
• Non-uniform Motion

If the motion is consistent with time we use the term uniform motion. It happens when a body has constant speed in a straight line. Let's study more about this.

## What is Uniform Motion?

Let us see an illustration to understand the concept in a better way. A roller is rolling on a wooden log with constant speed as shown in the figure. You could see that in the 1st second it covers the same distance as in the 2nd second, and so on. For any nth second the distance traveled will be the same. This is what the uniform speed concept tells us!

For any body moving in a straight line, if it covers equal distances in equal intervals of time, then the body is said to be in uniform motion. It can be represented in a graph.

Uniform speed graph: It is plotted for displacement (d) versus time (t). Here we could observe that for every 1 second there is a displacement of 5 m over any interval. It shows equal displacement in equal intervals of time, hence the slope is constant.

Uniform acceleration graph: It is plotted for velocity (v) versus time (t). The v-t graph for uniform motion gives constant acceleration as shown below.

## Equations of Motion

There are three equations of motion:

v = u + at
s = ut + $\frac{1}{2}$at$^2$
v$^2$ = u$^2$ + 2as

where
v = Final velocity
u = Initial velocity
a = acceleration
t = time taken.

## Uniform Motion Problems

Here are a few problems on uniform motion you can go through:

### Solved Examples

Question 1: A bicycle moving with a speed of 6 m/s accelerates at 0.1 m/s$^2$ for 10 s. Calculate its final velocity.

Solution:

Given: Initial speed u = 6 m/s, Acceleration a = 0.1 m/s$^2$, time taken t = 10 s

The final velocity is given by
v = u + at = 6 + 0.1 $\times$ 10 = 6 + 1 = 7 m/s.

The final velocity is 7 m/s.

Question 2: Joe is on a ride.
He stops in the traffic. When the signal turns green he accelerates his bike from rest at the rate of 4 m/s$^2$. What will be the displacement after 10 s?

Solution:

Given: Initial velocity u = 0, Acceleration a = 4 m/s$^2$, time taken t = 10 s

The displacement is given by
S = ut + $\frac{1}{2}$ at$^2$ = 0 + $\frac{1}{2}$ $\times$ 4 $\times$ 10$^2$ = 0.5 $\times$ 4 $\times$ 100 = 200 m.

## Uniform Circular Motion

The uniform motion of a body in a circular path is uniform circular motion. When a body moves in a circular path it changes its direction at every instant. At every point in the path the velocity is tangent to the circle, so the body moves with uniform speed but its velocity — and hence its acceleration — keeps changing direction.

Consider a body moving in uniform circular motion. At both points A and B the body moves with the same constant speed.

## Uniform Circular Motion Examples

We come across many illustrations of uniform circular motion in our daily life. Here are some:

• A merry-go-round turning at constant speed
• An athlete in a race running in a circular path at consistent speed
• Artificial satellites moving around the earth
• Earth moving round the sun

## Uniform Circular Motion Problems

Let us go through some solved problems in uniform circular motion:

### Solved Examples

Question 1: An object of mass 3 kg is tied to the end of a rope of 5 m length. What will be its speed if it is whirled around uniformly, completing one revolution in 2 min?

Solution:

Given: Mass of object m = 3 kg, Length of rope r = 5 m, time taken t = 2 min = 120 seconds

The distance covered in one revolution is given by
d = 2$\pi$r = 2 $\times$ $\pi$ $\times$ 5 m = 31.42 m

The speed is given by
S = $\frac{d}{t}$ = $\frac{31.42\ m}{120\ s}$ = 0.26 m/s.

Question 2: A car goes round a circular track covering a distance of 100 m with a speed of 10 m/s. How much time will it take to do so?

Solution:

Given: Distance d = 100 m, speed s = 10 m/s

The time taken is given by
t = $\frac{d}{s}$ = $\frac{100\ m}{10\ m/s}$ = 10 s.
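The solved examples above can be rechecked with a short added sketch (not part of the original page; the function names are illustrative only):

```python
import math

# Constant-acceleration equations, with variable names following the page: u, v, a, t, s.
def final_velocity(u, a, t):
    return u + a * t                  # v = u + at

def displacement(u, a, t):
    return u * t + 0.5 * a * t * t    # s = ut + (1/2)at^2

assert final_velocity(6.0, 0.1, 10.0) == 7.0   # bicycle example: 7 m/s
assert displacement(0.0, 4.0, 10.0) == 200.0   # bike-from-rest example: 200 m

# Uniform circular motion: speed for one revolution of a 5 m rope in 120 s.
speed = 2 * math.pi * 5.0 / 120.0
print(round(speed, 3))                # ≈ 0.262 m/s
```

Note the circular-motion speed comes out slightly above 0.26 m/s because the page rounds the circumference to 31.42 m before dividing.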
http://stackoverflow.com/questions/12476993/moving-a-file-from-local-machine-to-cluster/12477205
# moving a file from local machine to cluster So I know this will sound very simple, because I'm sure it is. I know when you want to move a file you can use cp or mv in a linux terminal to move from one directory to another. My problem is how do you do it when you want to move a file from your local machine to, say, a cluster. To access the cluster, I ssh into it and I have a directory there. I've tried absolute filepaths, but that clearly wouldn't work. - You use the scp command: scp /path/to/your/file.txt user@cluster_address:/path/in/cluster then it asks you for the password, which is the same as the one for ssh. Another option is that you can also mount the directory of the cluster machine using sshfs and then you can normally do cp and mv in the mounted directory. - Thank youuuu. The nice people at stackoverflow save me yet again! – pepsimax Sep 18 '12 at 14:35
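If you want to script the same transfer, here is a sketch of ours (not from the original answers) that builds the scp invocation with Python's standard library; the user, host and paths are placeholders:

```python
import subprocess

def scp_command(local_path, user, host, remote_path):
    """Build the argument list for copying a local file to a remote machine via scp."""
    return ["scp", local_path, f"{user}@{host}:{remote_path}"]

cmd = scp_command("/path/to/your/file.txt", "user", "cluster_address", "/path/in/cluster")
print(" ".join(cmd))
# To actually run it (prompts for the ssh password, same as ssh):
# subprocess.run(cmd, check=True)
```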
https://xeechou.net/posts/camera_matrix/
# The MVP Matrix

$\text{Model} * \text{View} * \text{Projection}$ is the first lesson in rendering objects in Computer Graphics (rendering being the process that lets people see them on a 2D screen): it transfers a 3D object from object space onto, in the end, a UV plane. The Model matrix is simple and easy to understand (just translation, scale and rotation), but the View matrix and Camera matrix are not obvious (although you can get them for free with a single call to glm::lookAt() and glm::perspective()).

# How does the View Matrix work?

The view matrix has another name, the extrinsic matrix, in Computer Vision, where people use it to find where the camera is.

The engines don’t move the ship at all. The ship stays where it is and the engines move the universe around it.

This simply means that the view matrix does nothing but remap everything so that the centre of the camera sits at $(0,0,0)$. In linear algebra terms, it is a linear transform that changes the basis, and one can use glm::lookAt() to generate the view matrix. So in the beginning, the camera sits at $(0,0,0)$. The up vector is $(0,1,0)$; since we don’t know the viewing direction, let's assume it is $(0,0,-1)$. Now imagine the universe is a huge cube box that surrounds us.

• If we want to move the camera left by $(-3, 0, 0)$, we can translate the cube by $(3,0,0)$.
• If we want to rotate the camera 30 degrees to the left, we can rotate the cube 30 degrees to the right.

So the naive implementation is simply -translation * -rotation. For the rotation part, there is a simple way to do it, called the Gram-Schmidt process. The essence is, again, projection: to express a point's coordinates in the new coordinate system, we simply project it onto the new axes with dot products.
The complete View matrix format is:

$$M = \begin{bmatrix} R_x & R_y & R_z & 0 \\ U_x & U_y & U_z & 0 \\ D_x & D_y & D_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & -T_x \\ 0 & 1 & 0 & -T_y \\ 0 & 0 & 1 & -T_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$R$, $U$, $D$ are the basis vectors of the new coordinate system. The principle is super simple: first reverse-translate the point, then project it onto the new coordinate system. In shorter form: $M = [R \mid t]$.

# The perspective projection

Perspective projection, on the other hand, is a way to project 3D scenes onto a 2D plane, the way human eyes and cameras do. This means an object further from us looks smaller than an object closer to us. It sounds natural, but how does the computer implement it? That's where the camera matrix is introduced.

## Camera matrix

To project objects to our eyes, we need a formula that makes further objects smaller. Given two points $[x_1, y_1, z_1]$ and $[x_2, y_2, z_2]$, they project to the same position if $x_1 / z_1 = x_2 / z_2$ and $y_1 / z_1 = y_2 / z_2$. The projection maps $[x, y, z]$ to $[d\frac{x}{z}, d\frac{y}{z}]$, where $d$ is the distance to the camera plane. Since there is no linear transform that does this with a 3D matrix, we have to use homogeneous coordinates.

$$\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & -1/d & 0 \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ 1\\ \end{bmatrix}= \begin{bmatrix} x\\ y\\ z\\ -z/d\\ \end{bmatrix}$$

And as these are homogeneous coordinates, we rescale to keep the last element equal to 1:

$$\begin{bmatrix} x\\ y\\ z\\ -z/d \end{bmatrix} \rightarrow \begin{bmatrix} -d\frac{x}{z}\\ -d\frac{y}{z}\\ -d\\ 1 \end{bmatrix} \rightarrow \begin{bmatrix} -d\frac{x}{z}\\ -d\frac{y}{z}\\ \end{bmatrix}$$

Equivalently, we can scale the whole matrix by $-d$ (turning the $1$s on the diagonal into $-d$ and the $-1/d$ into $1$) to reach the same goal.
$$\begin{bmatrix} -d & 0 & 0 & 0\\ 0 & -d & 0 & 0\\ 0 & 0 & -d & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ 1\\ \end{bmatrix} \rightarrow \begin{bmatrix} -dx\\ -dy\\ -dz\\ z \end{bmatrix} \rightarrow \begin{bmatrix} -d\frac{x}{z}\\ -d\frac{y}{z}\\ \end{bmatrix}$$

Finally, the camera matrix looks like this:

$$\begin{bmatrix} -fs_x & 0 & x_c\\ 0 & -fs_y & y_c\\ 0 & 0 & 1 \end{bmatrix}$$

It's a little more complex than what we derived, but the general idea stays the same.
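To tie the two halves together, here is a small numpy sketch (our own code, not from the post): it builds a look-at view matrix with the Gram-Schmidt-style basis construction described above, then applies the simple $-1/d$ projection with the homogeneous divide.

```python
import numpy as np

def look_at(eye, target, up):
    """View matrix as rotation * translation: move the camera to the origin,
    then project onto the camera's (right, up, direction) basis."""
    eye, target, up = (np.asarray(v, float) for v in (eye, target, up))
    d = eye - target                      # the camera looks down -D
    d /= np.linalg.norm(d)
    r = np.cross(up, d)                   # right = up x D
    r /= np.linalg.norm(r)
    u = np.cross(d, r)                    # true up, orthogonal to r and d
    rot = np.eye(4)
    rot[0, :3], rot[1, :3], rot[2, :3] = r, u, d
    trans = np.eye(4)
    trans[:3, 3] = -eye
    return rot @ trans

def project(point, d=1.0):
    """Pinhole projection: [x, y, z] -> [-d*x/z, -d*y/z] via the homogeneous divide."""
    x, y, z = point
    P = np.array([[1, 0, 0,      0],
                  [0, 1, 0,      0],
                  [0, 0, 1,      0],
                  [0, 0, -1 / d, 0]], float)
    h = P @ np.array([x, y, z, 1.0])
    return h[:2] / h[3]                   # divide by w = -z/d

V = look_at(eye=[0, 0, 5], target=[0, 0, 0], up=[0, 1, 0])
p_cam = V @ np.array([0, 0, 0, 1.0])      # the world origin seen from the camera
print(p_cam[:3])                          # -> [0, 0, -5]: five units ahead of the camera
print(project(p_cam[:3], d=1.0))          # -> [0, 0]: projects to the image centre
```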
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=50&t=71236&p=293221
## Q and K Funmi Baruwa Posts: 108 Joined: Wed Sep 30, 2020 9:50 pm ### Q and K Is the only difference between Q and K the fact that Q is measured at any time, and K is only measured at equilibrium? Silvi_Lybbert_3A Posts: 97 Joined: Wed Sep 30, 2020 9:43 pm ### Re: Q and K Yes, Q is the ratio of [P]/[R] when the reaction is not at equilibrium and K is that ratio when the reaction is at equilibrium. They are calculated the same way with the same formula. Side note reminder: when Q<K the reaction will proceed to the right (products) and when Q>K the reaction will proceed towards the left (reactants). Allan Nguyen 2G Posts: 95 Joined: Wed Sep 30, 2020 9:49 pm ### Re: Q and K You can calculate Q to figure out where the reaction will proceed if given the equilibrium constant, K. Similar to what Silvi said, when Q<K the reaction will proceed to the right and when Q>K the reaction will proceed to the left. Kelly Yun 2I Posts: 103 Joined: Wed Sep 30, 2020 10:03 pm ### Re: Q and K I was confused on this as well, but I try to remember Q as reaction quotient and K as equilibrium constant and that helps me tell the difference! Will Skinner Posts: 84 Joined: Wed Sep 30, 2020 9:28 pm ### Re: Q and K Correct, K is the ratio of products and reactants at equilibrium and Q is the ratio when it is not at equilibrium. The difference between Q and K tells us which direction the reaction will move. Shrinidhy Srinivas 3L Posts: 108 Joined: Wed Sep 30, 2020 9:39 pm Been upvoted: 2 times ### Re: Q and K As stated above, Q is the reaction quotient at any given time (not necessarily equilibrium). K is that ratio at equilibrium specifically. When Q < K, the reaction will shift towards the products. When Q > K, the reaction will shift towards the reactants. When Q = K, the reaction is at equilibrium and it will not shift. Hope this helps!
Devin Patel 2D Posts: 40 Joined: Tue Nov 17, 2020 12:18 am ### Re: Q and K Yes, Q is the reaction quotient and is essentially the ratio of products/reactants whenever the reaction is not at equilibrium. K is the ratio when the reaction is at equilibrium. The reaction quotient can help us figure out if the reaction is not at equilibrium and can inform us whether the reaction at that moment will favor the forward or reverse reaction. Kyle Dizon 3A Posts: 91 Joined: Mon Oct 05, 2020 12:16 am ### Re: Q and K Q and K are found using the same concept of Products/Reactants. The main difference between the two is that the reaction quotient (Q) is identified when the reaction is not at equilibrium while K is the actual equilibrium constant. Comparing these two tells us whether the reactants or products are favored in the reaction. Matlynn Giles 2E Posts: 88 Joined: Wed Sep 30, 2020 10:10 pm Been upvoted: 1 time ### Re: Q and K The best way I've found to remember this is Q stands for quotient! reva_bajjuri Posts: 104 Joined: Fri Oct 02, 2020 12:17 am ### Re: Q and K Silvi_Lybbert_3A wrote:Yes, Q is the ratio of [P]/[R] when the reaction is not at equilibrium and K is that ratio when the reaction is at equilibrium. They are calculated the same way with the same formula. Side note reminder: when Q<K the reaction will proceed to the right (products) and when Q>K the reaction will proceed towards the left (reactants). I thought Q could equal the value of K, which indicates the system is at equilibrium AJForte-2C Posts: 89 Joined: Wed Sep 30, 2020 10:00 pm Been upvoted: 1 time ### Re: Q and K I understand the relationship between Q and K, but can someone tell me why/when we would want to measure Q and when it would be important? Isaias Gomez D3A Posts: 67 Joined: Wed Sep 30, 2020 9:37 pm Been upvoted: 1 time ### Re: Q and K Quotient starts with Q. That's what Q is, while K is the constant Hannah Lechtzin 1K Posts: 89 Joined: Wed Sep 30, 2020 9:31 pm ### Re: Q and K Yep!
Q and K are found exactly the same way, so K is just Q when a reaction is at equilibrium. Posts: 73 Joined: Wed Sep 30, 2020 9:34 pm ### Re: Q and K That's correct! Q is the reaction quotient, so it can be found at any time, but K is only for when the reaction is at equilibrium. Otherwise, you calculate them the same way. Aria Movassaghi 1A Posts: 97 Joined: Wed Sep 30, 2020 9:40 pm ### Re: Q and K yes, K is measured at equilibrium Sable Summerfield Posts: 30 Joined: Tue Nov 17, 2020 12:18 am ### Re: Q and K Allan Nguyen 2G wrote:You can calculate Q to figure out where the reaction will proceed if given the equilibrium constant, K. Similar to what Silvi said, when Q<K the reaction will proceed to the right and when Q>K the reaction will proceed to the left. Could we go more in depth as to WHY when Q<K the reaction will proceed to the right and when Q>K the reaction will proceed to the left? If the Q value is P/R when the reaction is NOT at equilibrium and K is P/R when it is at equilibrium, why does this mean that when Q>K the reaction will shift to the left? Posts: 87 Joined: Wed Sep 30, 2020 9:40 pm ### Re: Q and K Yep! It's in the name itself. Q is the reaction quotient and K is the equilibrium constant. AHUNT_1A Posts: 110 Joined: Wed Sep 30, 2020 9:41 pm ### Re: Q and K Using this as a reference Posts: 84 Joined: Wed Sep 30, 2020 9:49 pm ### Re: Q and K Yeah, Q tells us the ratio of P/R concentrations at any time while K is the P/R concentrations at equilibrium. However, even though they may be calculated the same way, knowing Q and K can tell us which direction the reaction tends toward. austin-3b Posts: 59 Joined: Wed Nov 11, 2020 12:18 am ### Re: Q and K Yes, Q is any time. You would measure Q at that moment to determine the direction of the reaction. If Q<K, the reaction is going right; more products are made If Q>K, the reaction is going left; more reactants are made Tiao Tan 3C Posts: 100 Joined: Wed Sep 30, 2020 9:59 pm ### Re: Q and K Yes, you're correct.
Q is measured at any instant of the reaction but K is only measured after the reaction reaches equilibrium. Jaclyn Dang 3B Posts: 105 Joined: Wed Sep 30, 2020 10:02 pm ### Re: Q and K Q is at any time and they are calculated the same exact way. You would measure Q at that moment to determine the direction of the reaction. If Q=K then the reaction is at equilibrium If Q<K, the reaction is going right and it favors products If Q>K, the reaction is going left and it favors reactants Lung Sheng Liang 3J Posts: 85 Joined: Wed Sep 30, 2020 9:33 pm ### Re: Q and K Yes, Q is used when the chemical equation is not at equilibrium Leyla Anwar 3B Posts: 85 Joined: Wed Sep 30, 2020 10:03 pm ### Re: Q and K austin-3b wrote:Yes, Q is any time. You would measure Q at that moment to determine the direction of the reaction. If Q<K, the reaction is going right; more products are made If Q>K, the reaction is going left; more reactants are made Does this mean there can be multiple Q values for different times during the reaction? David Y Posts: 90 Joined: Wed Sep 30, 2020 9:49 pm ### Re: Q and K Yes, K is the equilibrium constant because it represents the ratio of products and reactants at equilibrium. jasmineculilap_3F Posts: 85 Joined: Wed Sep 30, 2020 9:40 pm ### Re: Q and K AJForte-2C wrote:I understand the relationship between Q and K, but can someone tell me why/when we would want to measure Q and when it would be important? You could measure Q in order to figure out what direction the reaction is going towards. When Q<K, the reaction favors the forward reaction/products and if Q>K, the reaction favors reactants (reverse reaction). Daniela Santana 2L Posts: 86 Joined: Wed Sep 30, 2020 9:59 pm ### Re: Q and K Hi! Yes, you are right about Q and K. Q is the reaction quotient and you can calculate this at any time. K is the equilibrium constant and you can only calculate this at equilibrium.
Gwen Casillan 3E Posts: 46 Joined: Sat Sep 07, 2019 12:17 am ### Re: Q and K Yes, Q is the reaction quotient, and K is the equilibrium constant. Q indicates that there is a shift, while K indicates equilibrium. We use Q and compare it to K to see whether the reaction is at equilibrium (Q=K), favors reactants (Q>K), or favors products (Q<K). SLai_1I Posts: 93 Joined: Wed Sep 30, 2020 9:52 pm ### Re: Q and K Yes, K can only be measured when the reaction is at equilibrium. Q, on the other hand, can be measured at any time of the reaction to determine the direction the reaction will continue. Nick Saeedi 1I Posts: 102 Joined: Wed Sep 30, 2020 9:39 pm ### Re: Q and K Yes, Q is a constant for the reaction at a time not at equilibrium, which is compared to K in order to see which direction the reaction is shifting towards. Neel Sharma 3F Posts: 91 Joined: Wed Sep 30, 2020 9:32 pm Been upvoted: 1 time ### Re: Q and K Yes. A good way to think of it is that K is simply the name of Q when the system is at equilibrium. Everywhere else it is just the reaction quotient, Q. Hope this helps! Hannah Lechtzin 1K Posts: 89 Joined: Wed Sep 30, 2020 9:31 pm ### Re: Q and K Yep! They are calculated the same way, K just denotes that the reaction is at equilibrium. 305572629 Posts: 86 Joined: Wed Sep 30, 2020 9:41 pm ### Re: Q and K Q is the reaction quotient and K is the equilibrium constant. If Q>K the reaction will shift left, and if Q<K the reaction will shift right. DominicMalilay 1F Posts: 99 Joined: Wed Sep 30, 2020 9:36 pm ### Re: Q and K Q is the more general term compared to K, which specifies a specific instance (equilibrium) in the rxn! joshtully Posts: 87 Joined: Wed Sep 30, 2020 9:36 pm ### Re: Q and K K is just Q at equilibrium. Michael Cardenas 3B Posts: 48 Joined: Wed Sep 30, 2020 9:34 pm ### Re: Q and K Yeah, K and Q are the same ratio, but K is at equilibrium concentrations while Q is at any point of time during the reaction.
Kelly Ha 1K Posts: 90 Joined: Wed Sep 30, 2020 9:52 pm Been upvoted: 1 time ### Re: Q and K Although calculated the same way, Q is the reaction quotient while K is the equilibrium constant. K is basically just Q at a specific point (at equilibrium). Posts: 88 Joined: Wed Sep 30, 2020 9:38 pm ### Re: Q and K Q is the reaction quotient, which is essentially the ratio of products to reactants of a reaction that is not at equilibrium. K, on the other hand, is the equilibrium constant, which is the ratio of products to reactants at equilibrium. You can relate Q to K to find out which direction the reaction will begin to favor. Posts: 87 Joined: Wed Sep 30, 2020 9:40 pm ### Re: Q and K Yes, that's correct! When Q and K are different, we can tell that the concentrations are not at equilibrium. Xinying Wang_3C Posts: 94 Joined: Wed Sep 30, 2020 9:39 pm ### Re: Q and K Basically yes, Q is the ratio for a reaction measured at any time, and K should be a constant number for the same equation conducted under the same temperature. Laura 3l Posts: 103 Joined: Wed Sep 30, 2020 9:55 pm Been upvoted: 1 time ### Re: Q and K K is calculated when the reaction is at equilibrium at a certain temperature, while Q can be calculated at any time when the reaction is not at equilibrium at that same temperature. By solving for Q and comparing it to K you can determine which way the reaction needs to shift, whether there are more reactants that still need to form product (shift right) or vice versa. Posts: 150 Joined: Wed Sep 30, 2020 10:01 pm Been upvoted: 2 times ### Re: Q and K One way to compare Q and K is that if Q>K then you have more products in the reaction than the amount of products there should be at equilibrium. Therefore, to reach equilibrium, the reaction will shift left. If Q<K then there are more reactants present than the amount of reactants there should be at equilibrium, so the reaction will shift right to produce more products and balance.
Cecilia Cisneros 1F Posts: 133 Joined: Wed Sep 30, 2020 9:45 pm ### Re: Q and K Essentially, yes. Q is measured at any point in the reaction that is not at equilibrium. However, K must be measured only at equilibrium. John_Tran_3J Posts: 91 Joined: Wed Sep 30, 2020 9:58 pm ### Re: Q and K Q, the reaction quotient, is the concentration ratio at ANY TIME during the reaction. K is when the equilibrium is set between the products and reactants. Sejal Parsi 3K Posts: 76 Joined: Wed Sep 30, 2020 10:03 pm ### Re: Q and K Yes, Q is the reaction quotient and can be found at any time, but K can only be found at equilibrium. emwoodc Posts: 81 Joined: Fri Aug 09, 2019 12:16 am ### Re: Q and K You are correct! K is found at equilibrium and Q is found at any time of the reaction! Carly_Lipschitz_3H Posts: 93 Joined: Wed Sep 30, 2020 9:56 pm ### Re: Q and K Yes, Q can be measured at any time. K is the equilibrium constant, so it stays constant. Q can be greater than, equal to, or less than K and helps you determine if the forward or reverse reaction is favored.
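To make the thread's Q-versus-K rule concrete, here is a small numeric sketch (the reaction and all concentrations are made up for illustration): for aA + bB ⇌ cC, Q = [C]^c / ([A]^a [B]^b), and we compare it against K.

```python
def reaction_quotient(products, reactants):
    """Q = product of [P]^coeff divided by product of [R]^coeff.
    `products`/`reactants` are lists of (concentration, coefficient) pairs."""
    q = 1.0
    for conc, coeff in products:
        q *= conc ** coeff
    for conc, coeff in reactants:
        q /= conc ** coeff
    return q

def shift_direction(Q, K, tol=1e-9):
    """Q < K: shifts right (products); Q > K: shifts left (reactants); Q == K: equilibrium."""
    if abs(Q - K) <= tol * max(abs(K), 1.0):
        return "at equilibrium"
    return "shifts right (toward products)" if Q < K else "shifts left (toward reactants)"

# Hypothetical reaction A + 2B <=> C with K = 4.0
Q = reaction_quotient(products=[(0.5, 1)], reactants=[(1.0, 1), (0.5, 2)])
print(Q)                        # 0.5 / (1.0 * 0.25) = 2.0
print(shift_direction(Q, 4.0))  # Q < K, so the reaction proceeds to the right
```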
http://www.physicsforums.com/showthread.php?t=391475
# A hideous Linear Regression/confidence set question by Phillips101 Tags: hideous, linear HW Helper P: 1,361 Notice that $$\frac{\hat{\beta}' X'X \hat{\beta}}{\sigma^2}$$ has a $\chi^2$ distribution. However, the variance is unknown, so you need to estimate it (with another expression from the regression). What would you use for the estimate, and what is its distribution?
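Following the hint rather than giving a full solution: the usual estimate is $s^2 = \text{RSS}/(n-p)$, which is distributed as $\sigma^2 \chi^2_{n-p}/(n-p)$ independently of $\hat{\beta}$, so the studentized quadratic form becomes an $F$ statistic. A quick simulation sketch of ours (with $\beta = 0$, so the numerator $\chi^2_p$ is central):

```python
import numpy as np
from numpy.random import default_rng

rng = default_rng(0)
n, p, sigma = 50, 3, 2.0
X = rng.normal(size=(n, p))

# Under beta = 0:  beta_hat' X'X beta_hat / (p * s^2)  ~  F(p, n - p)
stats = []
for _ in range(2000):
    y = sigma * rng.normal(size=n)               # y = X*0 + noise
    beta_hat, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = float(rss[0]) / (n - p)                 # unbiased estimate of sigma^2
    quad = beta_hat @ (X.T @ X) @ beta_hat       # beta_hat' X'X beta_hat
    stats.append(quad / (p * s2))

# The mean of F(p, n-p) is (n-p)/(n-p-2) = 47/45, roughly 1.044
print(np.mean(stats))
```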
https://www.greencarcongress.com/2017/10/20171002-tour.html
## Rice University team finds asphalt-lithium metal anode enables faster charging, resistance to dendrite formation; Li-S test cell ##### 02 October 2017 The Rice lab of chemist James Tour has developed anodes comprising porous carbon made from asphalt that showed exceptional stability after more than 500 charge-discharge cycles. A high-current density of 20 milliamps per square centimeter demonstrated the material’s promise for use in rapid charge and discharge devices that require high-power density. The finding is reported in the journal ACS Nano. In addition, the researchers found that the new anode prevented the formation of lithium dendrites. These mossy deposits invade a battery’s electrolyte. If they extend far enough, they short-circuit the anode and cathode and can cause the battery to fail, catch fire or explode. The Tour lab previously used a derivative of asphalt—specifically, untreated gilsonite, the same type used for the battery—to capture greenhouse gases from natural gas. This time, the researchers mixed asphalt (Asp) with conductive graphene nanoribbons and coated the composite with lithium metal through electrochemical deposition. The ultrahigh surface area of >3000 m2/g (by BET, N2) of the porous carbon ensures that Li was deposited on the surface of the Asp particles, as determined by scanning electron microscopy (SEM), to form Asp-Li. Graphene nanoribbons (GNRs) were added to enhance the conductivity of the host material at high current densities, to produce Asp-GNR-Li. Asp-GNR-Li has demonstrated remarkable rate performance from 5 A/gLi (1.3C) to 40 A/gLi (10.4C) with coulombic efficiencies >96%.
Stable cycling was achieved for more than 500 cycles at 5 A/gLi, and the areal capacity reached up to 9.4 mAh/cm2 at a highest discharging/charging rate of 20 mA/cm2 that was 10× faster than typical LIBs, suggesting use in ultrafast charging systems. —Wang et al. The lab combined the anode with a sulfurized-carbon cathode to make full batteries for testing. The batteries showed a high power density of 1,322 W/kg and high energy density of 943 Wh/kg. The capacity of these batteries is enormous, but what is equally remarkable is that we can bring them from zero charge to full charge in five minutes, rather than the typical two hours or more needed with other batteries. —James Tour An earlier project by the lab found that an anode of graphene and carbon nanotubes also prevented the formation of dendrites. Tour said the new composite is simpler. Schematic illustration of the typical lithium dendrites (left) vs the lithium-coated high-surface-area porous carbon from asphalt (right): the team found that when Asp-GNR is present, its conductivity and high surface area allow Li to be coated on its surface, resulting in a smooth surface of Li metal that gives a lower overpotential for both lithiation and delithiation. Credit: ACS, Wang et al. While the capacity between the former and this new battery is similar, approaching the theoretical limit of lithium metal, the new asphalt-derived carbon can take up more lithium metal per unit area, and it is much simpler and cheaper to make. There is no chemical vapor deposition step, no e-beam deposition step and no need to grow nanotubes from graphene, so manufacturing is greatly simplified. —James Tour Rice graduate student Tuo Wang is lead author of the paper.
Co-authors are Rice postdoctoral researcher Rodrigo Villegas Salvatierra, former postdoctoral researcher Almaz Jalilov, now an assistant professor at King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, and former Rice research scientist Jian Tian, now a professor at Wuhan University, China. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering at Rice. The Air Force Office of Scientific Research, EMD-Merck and Prince Energy supported the research. Resources • Tuo Wang, Rodrigo Villegas Salvatierra, Almaz S. Jalilov, Jian Tian, and James M. Tour (2017) “Ultrafast Charging High Capacity Asphalt-Lithium Metal Batteries” ACS Nano doi: 10.1021/acsnano.7b05874 Is this the vastly superior battery that Harvey keeps mentioning? This may have the potential to become the first very high capacity (5X +) very quick charge (5 minutes), lower cost affordable batteries for extended range BEVs and fixed applications to store excess/surplus REs and regulate grids etc. Let's mass produce ASAP. I would buy a 2nd/3rd generation extended range (500/miles) BEV with a 150+ kWh battery pack as soon as ultra quick (5 minutes) charging public utilities are available! If this one were to pan out (moving from the lab to practical real world considering cost/manufacturing, safety, yada, yada, yada) then it would truly be a game changer. I believe that once you get over ~250miles of range, the ability to quickly charge becomes more important than trying to further improve the range. Obviously, both are good, but if you can top it off in 5 minutes...then problem solved: You buy however much capacity YOU need and nothing more. Harvey can buy his 500 miles and I'll be happy with my 200 :) Just to reiterate the most incredible fact in the article, the whole battery demonstrated 943 Wh/kg. That's more than 3x what the 2170s currently coming out of Tesla's Gigafactory can manage. 
Proof that if we can crack the LiS problems, we're looking at one third the weight of current equivalent batteries or three times the range for much the same cost. It looks too wonderful to be true: a battery like this would give a range comparable to a good diesel, and with high charge and discharge rates on top of it. This is no ordinary article; we have here the holy grail of batteries, no more and no less, but....... I see one big detail missing: they say nothing about timelines, or whether it even has any chance of being commercialized, even at a high price. Interesting, so an EV running off a battery containing asphalt would be running on petroleum? A BEV, limited to 200 miles range, is a no go in our adverse weather area. A good condition 500 miles range could give about 350 miles in bad weather. The same reduction (about 33%) would apply to 200 range units. Harvey, That is WHY I TOLD YOU THAT YOU COULD HAVE YOUR CHOICE AND I CAN HAVE MINE. Are you physically capable of recognizing that not everyone has the same requirements as you? Just for a minute, can you fake it? LOL I have the same requirements as Harvey ;) Though I would be okay with 30 min charging, especially for something that would have that many kwh. Hopefully this battery turns out... it seems capable so far, and not a bunch of exotic materials either. so this gets me thinking... if EVs are the new norm let's say in 20-40 years... wouldn't it be prudent now to build out the infrastructure to handle fast charging to nearly every (new) home? I mean if you have a 150+kwh pack, it could take days on single phase. a pickup might need 300kwh or more, so we better be planning for that future now, rather than making these half measures... I guess that's what frustrates me about this EV future. It's better to plan and overprovision than to have to rip all of these current "fast chargers" from the ground to place ones that are 10-20x more powerful.
We are spending gobs of money on technology that might last for 5 years before it's outdated. I use this example a lot, but a relevant comparison would be the short amount of time we went from SD to SDHC to SDXC... whereas other formats didn't have these compatibility issues. We as a global community need to come up with a solution for plugs on our EVs, we need to guess and say that we will charge at ex. (arbitrary numbers) 100amps at 480v or something like that, but allow up to 150 amps at 1200v or something like that, just build out for the near impossible, that way if it's an EV semi, or an EV passenger car we can manage just the same... and we could modulate the power down or up based on the need/vehicle. I get your exasperation, DaveD. Harvey loves to spec out cars just over the horizon, available someday, just not today. He needs 500 mile range with 5 minute recharge because after driving for 7 hours, he wants to get on the road and drive another 7 hours. That's his requirement, and he's stickin' to it. Until some automaker hits that spec for $30k and then by gum, he's gonna need 10 hours of driving and a two minute recharge time. I'll grant that Harvey does have some special requirements to meet, like not freezing to death if you get stuck on the side of the road for 30 minutes. Or the charge coupler not freezing in the socket if you leave your car plugged in for longer than it takes a cup of steaming hot coffee to turn into iced coffee while you're still drinking it. I once almost froze to death just crossing the street in Toronto in December. Those are hardy people up there I'll tell ya. My hosts laughed at my flimsy Southern California ski jacket and lent me a proper coat so I didn't take a place among the marble statuary. I hear you Harvey. It gets cold up there in the great white north. You need some headroom on your batteries. Somebody will deliver it someday, and maybe the "About" page will come with credits (and royalties no doubt) to Rice University.
The current standard allows 800 V (maybe more?), and the first stations are already built in Europe. With cheaper and more powerful batteries, every house will also get batteries as a buffer, so when you charge the car it will be quick, and then the buffer battery will recharge at a much slower rate from the grid. You will also benefit from cheaper electricity when "unpredictable" renewable sources take a larger share of the electricity mix. Also note that even if you have 150 kWh, you only need to charge overnight what you have used during that day, and the typical daily drives of an average person are really very short. For longer trips we will have to rely on fast charging stations.

@Cheese Well, a 150 kWh charge taking days is a bit of an exaggeration. It is actually 22 hours at 7 kW (single phase).

Most of the home storage solutions currently offered are in the range of 3-15 kWh per module, and come with a steep premium. The home uses a lot less electricity than a vehicle would at once; you wouldn't get much charge unless you had closer parity between your car battery and your house battery.

Harvey and several others are looking for a vehicle that would be practical to replace their ICE before they jump into an EV. I can't own both at the same time. I drive a lot, cross-country trips for work and leisure; a real 500-mile range and a 30-45 min recharge would be acceptable to me. I also fill my gas up before it says zero miles till empty; I don't think I would like to deep discharge a car battery to get that range. 40-80% charge is where lithium likes to be. Lead acid likes 70-100% charge. I'm in the market for a huge SUV, to tow a trailer just over 5000 lbs (also looking at one that weighs just under 3000). I'd like a pretty hardy battery if I were to get an EV one. I think if I were to get an electric version it would have to be a hybrid.

Harvey may seem irrational to you guys, but he represents a large demographic. He wants EVs to have parity with ICEs. A seamless replacement, if you will.
People spend a lot of money on gasoline vehicles right now just to have that freedom. You basically have to deliver a product that people want, or change their minds to make them want it.

Went from anodes to towing in 6 posts, good focus.

Currently, TESLA is making first generation (300 miles) extended range BEVs with low performance, heavy, costly batteries. A Model S100 at $100K is a move towards the objective. Second generation extended range BEVs, with near future 2X to 3X batteries, will weigh less and go up to 500 miles with 150 kWh to 160 kWh. At the current battery technology development rate, those BEVs should be available by 2025 or so. Heavy tow e-trucks or large e-pick-ups used to tow 5000 lbs may need up to 200 kWh, or have to recharge more often, every 200/250 miles or so? Not recommended for long trips? A small FC to extend range could be a good fix? Ultra quick charging facilities will follow. Total capacity will surprise many posters. Asia (mainly China, Japan and S. Korea) will drive the race to electrified vehicles and improved batteries/FC mass production early into the next decade. Of course, ultra quick charge/refill facilities for BEVs and FCEVs will follow. The cost will be about the same as the total cost of hurricanes and tropical storms for a single year or two. Some of the private residence e-energy requirements could be filled with near future higher performance/lower cost solar panels-batteries and/or FCs? Larger installations could supply enough energy to feed home BEVs/FCEVs?
http://math.stackexchange.com/questions/263136/definition-of-topology
# Definition of topology

I am learning topology from the book by Munkres. Munkres starts the topic by describing the way topology was defined. It says that whenever we define anything in mathematics, we define it in such a way that it covers some interesting aspects of mathematics that can be studied under the object being defined, and at the same time it should be restricted from being overly general. Can anyone shed some light on the way the definition of topology was formulated along these lines? May I know the difference between point set topology and general topology?

- Point set topology and general topology are just two names for the same thing; there is no real difference. – Brian M. Scott Dec 21 '12 at 10:03
- Yes. Point-set topology is general topology. To be distinguished from algebraic topology, differential topology, etc. – Hui Yu Dec 21 '12 at 15:19
- @HuiYu Thank you – danny gotze Dec 22 '12 at 9:48
- @BrianM.Scott Sir, will it be right to say that we investigate the same things in topology as in analysis, but in a quasi-quantitative way of open sets from which we derive general properties of metric spaces? – danny gotze Dec 22 '12 at 9:49
- Not really: metric spaces are just a small part of topology. It would be better to say that general topology deals with concepts that have their roots in metric spaces but that generalize those roots enormously, mostly in directions that move away $-$ sometimes very far away $-$ from the quantitative aspects of metric spaces. – Brian M. Scott Dec 22 '12 at 9:55

An example: think about the definition of an equi-continuous function. The $\delta-\epsilon$ definition is annoying. When you define it in terms of small open sets, rather than epsilons and deltas, the outcome is a beautiful and revealing definition which actually tells you something intuitive about the function.
http://www.shirpeled.com/2018/02/
## Wednesday, February 21, 2018

### Hierarchical Hierarchical Clustering: A Concept So Nice, We Named It Twice

While working at Resonai, I wrote a piece of code that performs Hierarchical Clustering, in collaboration with David Lehavi. In addition to various optimizations I won't get into, we applied a nice heuristic that allowed a considerable improvement in the program's memory footprint, as well as the running time. The more formal name we gave it was Spatially Sensitive Hierarchical Clustering (SSHC), but we ended up referring to it as Hierarchical Hierarchical Clustering, which is funnier, and better reflects what's really going on.

### Hierarchical Clustering in a Nutshell

The following image, taken from the Wikipedia article on HC, illustrates the basic notion rather well: Suppose we have 6 items that we wish to cluster, and we have some metric that we can compute between subsets of those items. We get a hierarchy of clusters via the following greedy algorithm:

1. Let each item be a cluster (with a single element)
2. Let $S$ be the set of all clusters
3. While $|S| > 1$:
   1. Find the closest pair of clusters $X, Y$ in $S$.
   2. Define a new cluster $Z := X \cup Y$
   3. Add $Z$ to $S$, remove $X$ and $Y$ from $S$

So the diagram above implies the following possible sequence of cluster unions (it may be slightly different):

1. $\{b\} \cup \{c\}$
2. $\{d\} \cup \{e\}$
3. $\{d,e\} \cup \{f\}$
4. $\{b,c\} \cup \{d,e,f\}$
5. $\{a\} \cup \{b,c,d,e,f\}$

There are a number of obvious optimizations; for example, one does not have to recompute all distances after each creation of a new cluster. Rather, only the distances that involve the new cluster.

### What is It Good For?

In most of the use cases that we employed, the initial atoms were triangular facets of a 3D mesh. One such use case is mesh segmentation, that is, the process of breaking down a 3D object into meaningful sub-parts. There's a well-known paper from 2006 by Attene et al. that describes such an approach.
The metric chosen for distance between segments (= clusters) is how much they resemble certain primitive shapes the authors chose in advance (cylinder, cube, sphere, etc.). As can be seen in this image taken from the paper, once one has the full hierarchy of segments, this tree can be trimmed according to the number of clusters one wishes. source: "Hierarchical mesh segmentation based on fitting primitives" by Attene et al

### The Problem With HC

The natural approach we took was to consider this not as atoms where all pairwise distances needed to be considered, but as a graph, where only distances between neighboring vertices had to be considered. In the 3D mesh case, adjacent faces were represented by neighboring atoms in the HC tree. So now we simply put all the edges of the graph (and the respective distances) into some implementation of a min-heap, and began the simple process of:

• Extracting the minimal edge
• Uniting the clusters it connected
• Updating the graph (and the edges in the heap) accordingly

This became an issue when the number of items stored in the heap was in the millions and tens of millions, which is very much the case when you get a high-quality 3D scan of a room, for example. It turned out that operations on the heap became unbearably expensive, and the memory footprint was terrible, since we had to store this huge heap in RAM.

### The Solution: HHC

We realized that changes in the HC tree were almost always very local: one computes the distance between pairs of clusters, which are neighboring nodes in the graph, and sometimes unites them in a way which affects only the neighbors of the united clusters. So why not divide and conquer? What we ended up doing is this:

• Break down the model very crudely into K parts, by doing something that is not far from just putting it on a grid and taking cubes. Choose K such that each part will have not too many facets in it, but not too few.
• Run HC on each part separately until the number of clusters in the part becomes small.
• Now unite adjacent parts until the number of clusters in each part is again not too big but not too small.

Note that this is in effect doing a Hierarchical Clustering of the parts, hence the winning name. Also note that it effectively means you work on a number of very small heaps all the time and never on a very large one. This means heap operations are now considerably cheaper; indeed, the memory footprint went down by a factor of 5 or so, and the running time improved dramatically on machines with slower hard drives (since less swapping was involved).

### The Price: Accuracy

As often happens, the price of heuristics is that they hurt the algorithm's accuracy. In our case it means that if the minimal edge happens to connect atoms that lie in different parts, you will get to it much later than you otherwise would have, but the reduction in running time and resource consumption made it worth our while.
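The greedy loop from the "Nutshell" section can be sketched in a few lines of Python. This is a naive O(n³) illustration of plain HC, not the SSHC heuristic; the single-linkage metric and the toy 1-D points are my own choices for demonstration:

```python
def hierarchical_clustering(points, dist):
    """Greedy agglomerative clustering: repeatedly merge the closest
    pair of clusters until one remains. Returns the merge order."""
    clusters = {i: frozenset([i]) for i in range(len(points))}
    merges = []
    while len(clusters) > 1:
        # Find the closest pair of live clusters (naive O(n^2) scan).
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    d = dist(clusters[a], clusters[b], points)
                    if best is None or d < best[0]:
                        best = (d, a, b)
        _, a, b = best
        new_id = max(clusters) + 1          # fresh id for the union
        clusters[new_id] = clusters[a] | clusters[b]
        del clusters[a], clusters[b]
        merges.append(sorted(clusters[new_id]))
    return merges

def single_link(ca, cb, pts):
    """Single-linkage distance between two clusters of 1-D points."""
    return min(abs(pts[i] - pts[j]) for i in ca for j in cb)
```

On the points `[0.0, 0.1, 1.0, 1.1, 5.0]`, the first two merges pair up the two tight pairs before anything touches the outlier, mirroring the $\{b\} \cup \{c\}$, $\{d\} \cup \{e\}$ unions in the diagram.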
https://math.stackexchange.com/questions/3323570/ramsey-growth-model-phase-diagram-qualitative-analysis-vector-forces
# Ramsey growth model: phase diagram, qualitative analysis, vector forces

I'm missing something here: I don't know how to derive the vector forces for this system of differential equations shown in the picture. It is explained in the picture, but I don't get the parts that I marked in orange (... which implies that k < k* ...). Also, how do you know c_dot = 0 is a vertical line and not a horizontal one? How do you know A2 is where c_dot > 0? Where is the unstable arm? What they say at the bottom of the page, that K = capital (as opposed to k = K/L) is decreasing in the steady state, means it is decreasing at the same rate as the labor force grows (= n), I guess? Is it possible to linearize these differential equations and derive the solutions analytically assuming log utility (e.g. as a percentage deviation from the steady state)? How would you start this? Thanks!
http://physics.stackexchange.com/tags/chaos-theory/new
# Tag Info So I figured it out--wasn't too hard. First, we need to decide what it means for the mass to have "flipped over". The point where the mass flips is the point where the absolute angle it makes with the normal is bigger than 180 degrees. So all we need to do is find the first time when $|\theta_{2}|>\pi$. That can be done with any programming ...
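Given a sampled trajectory, the "first time $|\theta_{2}|>\pi$" test is a simple scan. A minimal sketch, where the time and angle arrays are assumed to come from whatever integrator produced the double-pendulum solution:

```python
import math

def first_flip_time(times, theta2):
    """Return the first sampled time at which |theta2| exceeds pi
    (the mass has swung past the vertical), or None if it never flips."""
    for t, th in zip(times, theta2):
        if abs(th) > math.pi:
            return t
    return None
```

For a finer answer one could interpolate between the two samples bracketing the crossing, but on a dense time grid the first sample past the threshold is usually good enough.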
https://mathematica.stackexchange.com/questions/235399/curve-fitting-data-using-polynomial/235402
Curve Fitting Data using Polynomial

I have this data set in Mathematica: data = {{1980, 4716.71636}, {1981, 4530.36984}, {1982, 4301.97069}, {1983, 4335.91656}, {1984, 4468.26205}, {1985, 4484.33818}, {1986, 4487.85587}, {1987, 4680.83405}, {1988, 4885.5905}, {1989, 4948.02116}, {1990, 5121.17944}, {1991, 5071.56391}, {1992, 5174.6706}, {1993, 5281.38661}, {1994, 5375.0338}, {1995, 5436.69799}, {1996, 5625.04188}, {1997, 5701.92092}, {1998, 5749.89306}, {1999, 5829.51995}, {2000, 5997.29891}, {2001, 5899.85548}, {2002, 5942.42141}, {2003, 5991.19093}, {2004, 6105.44411}, {2005, 6130.55242}, {2006, 6050.3846}, {2007, 6127.88822}, {2008, 5928.25633}, {2009, 5493.54791}, {2010, 5700.10834}, {2011, 5572.58478}, {2012, 5371.77717}, {2013, 5522.90837}, {2014, 5572.10631}, {2015, 5422.96568}, {2016, 5306.66246}, {2017, 5270.74853}, {2018, 5416.27788}} But I do not know how to write the code to curve fit it. I have read multiple tutorials and watched videos on curve fitting polynomial data, but they are either about creating the dataset from an already established equation, or the instructions just flat out confused me. I have it curve fitted in Excel with a nasty equation that starts with an $$x^6$$ and has numbers with awful decimals. The goal is to get a cleaner equation; how can I do this? I have the data graphed as such: • A Chebyshev basis gives better coefficients. – Michael E2 Nov 26 '20 at 23:09 The goal is to get a cleaner equation; how can I do this? You can use (the experimental) FindFormula. Here is an example: dsFormulas = FindFormula[N@data, x, 5, All, SpecificityGoal -> 1, RandomSeeding -> 23]; dsFormulas = dsFormulas[SortBy[#Complexity &]] Select a "cleaner" formula: formula1 = Keys[Normal[dsFormulas]][[1]] (* 5325.93 + 731.662 Cos[19. x] *) Plot the formula together with the data: ListPlot[{data, {#, formula1 /. 
x -> #} & /@ data[[All, 1]]}, Joined -> {False, True}, PlotLegends -> {"Data", "Found formula"}, AspectRatio -> 1/3, PlotTheme -> "Detailed", ImageSize -> Large]

EDIT: If you just want a cleaner function, then stick with the excellent answers from @AntonAntonov and @MichaelE2. As they have shown, curve fitting can be done quite easily for your data in Mathematica, but it's my opinion that it's either the wrong tool for the job, or at least the results are more easily misinterpreted (unless you have a compelling reason to use a function that you haven't expressed here). Also, I stole the plot styling from @AntonAntonov's answer and forgot to give them credit.

ORIGINAL: I would argue that data smoothing would be better here than curve fitting. Generally, the point of curve fitting is either to extract fitting parameters or to be able to extrapolate (a little way) past the edge of the data. To do that, you need to have the model (or a small set of candidate models) first. By modelling with a polynomial of any kind, you're essentially predicting that the emissions will go to $$\pm \infty$$ at some point in the future and were $$\pm \infty$$ in the past. By using a cosine model, you're predicting a constant oscillation in the emissions. Either way, the fitting constants/equations are almost certainly meaningless and there is probably no predictive power in choosing a random model. I would strongly caution against using any fitting constants returned for any of these curves unless you have other reasons to believe the model is correct. With data smoothing, you're simply smoothing out sudden jumps in the data to allow the eye to more clearly follow a long-term trend. You can also do things like filter the data to remove long-period oscillations instead of short ones if you know that they are caused by some outside force. There are various kinds of filters already available in Mathematica (though you can of course implement any kind of filter you like).
I chose a Gaussian filter with a radius of 3 here (so that data over a 7 year period is considered), though other common options might be LowpassFilter, MovingAverage, or MeanFilter. ListPlot[{ data, {data[[All, 1]], GaussianFilter[data[[All, 2]], 3]}\[Transpose] }, AspectRatio -> 1/3, Joined -> {False, True}, PlotLegends -> {"Data", "7-Year Trend"}, PlotTheme -> "Detailed" ] • @AntonAntonov. I disagree. It is common for posters to pose x/y questions (for those unfamiliar, requesting help on "x" when the actual problem is "y"). This post is a plausible candidate. Inferring and responding to the "y" is not considered off-topic. – Daniel Lichtblau Nov 27 '20 at 15:43 • @DanielLichtblau I agree with your big picture perspective. My previous comment also comes from a "newcomer to WL" perspective. 1) A Newcomer wants to get a simple, concise expression for some data using WL. 2) An answer tells newcomer to use filtering, because WL has lots of filtering capabilities, and because of some methodological reasons. 3) Newcomer thinks WL cannot do fitting to data in a simple, automatic way and that is why the Newcomer is given a methodology answer that provides alternatives that are not of interest. 4) That impression of WL is wrong. – Anton Antonov Nov 27 '20 at 16:30 • @DanielLichtblau I realize that I both exaggerate and oversimplify in my previous comment. I do think this answer is good to have, but only in conjunction with an answer that provides an easy way to find a fitting expression. (Which was requested by OP.) – Anton Antonov Nov 27 '20 at 16:35 • @AntonAntonov I am certainly biased about this topic because I think that one of the best things about this forum is that challenges or alternatives to the OP's stated objectives can obtained from subject matter experts. In other words, it's not just about doing just what the OP asks for. 
I think this is especially true for statistics problems where the objective and potential consequences are at least as important as getting some function to execute properly. – JimB Nov 27 '20 at 17:05 • @AntonAntonov I agree that there should also be a direct answer to the question asked. I made my post after your answer had already been accepted which is why I didn't provide one of my own, but I should have referenced the other answers and I've added some text to try to address that issue. If it doesn't properly clear up the issues you raised, feel free to edit it directly or else suggest some fixes. – MassDefect Nov 27 '20 at 19:46 The choice of basis makes a difference. You don't need a Chebyshev basis (see my comment), but a power basis $$\{\,(x-c)^k\,\}_{k=0}^n$$ centered at a number $$c$$ in the domain of the data will behave better than one centered far outside it, such as the standard power basis $$\{\,x^k\,\}_{k=0}^n$$. That's why the OP's polynomial looks so horrible. Two alternatives are to center at beginning or in middle of the data: domain = MinMax@data[[All,1]]; basis = (x - First@domain)^Range[0, 6]; basis = (x - Mean@domain)^Range[0, 6]; Here is the result using the first basis: basis = (x - First@domain)^Range[0, 6]; lmf = LinearModelFit[N@data, basis, {x}]; lmf[x] Plot[ lmf[x], {x, Min@domain, Max@domain}, Prolog -> {Point@data}, Frame -> True, GridLines -> Automatic ] (* 4663.63 - 208.806 (-1980 + x) + 50.4679 (-1980 + x)^2 - 3.9543 (-1980 + x)^3 + 0.167273 (-1980 + x)^4 - 0.00378372 (-1980 + x)^5 + 0.0000344867 (-1980 + x)^6 *) A nicer presentation of the polynomial may be achieved in terms of the increment from the beginning of the data: lmf[First@domain + Δx] (* 4663.63 - 208.806 Δx + 50.4679 Δx^2 - 3.9543 Δx^3 + 0.167273 Δx^4 - 0.00378372 Δx^5 + 0.0000344867 Δx^6 *) One purpose in fitting is to estimate parameters of a model of a phenomenon from experimental data. 
That doesn't seem to have any applicability here, which may be what prompted @MassDefect to suggest an alternative approach. By the way, I think the OP's original fit was done correctly. • Thanks, @Anton. – Michael E2 Nov 27 '20 at 17:24 • Why is the centered polynomial better? Is it just because it avoids numerical error due to large powers of the abscissae? Also, it makes the coefficients have values of order 1 (need to think what the numerical range would be). – Hugh Nov 27 '20 at 23:06 • @Hugh Yes, it's a numerical issue, and it also makes the coefficients smaller. Any polynomial basis yields the same polynomial function if computed exactly. If the terms of the function are large and the value low, then there must be corresponding round-off error. The degree-6 term in the OP's polynomial and mine have the same coefficient. At x = Last@domain, they are 2 * 10^15 and 10^5 resp. (due to x^6 vs. Δx^6). Something has to cancel out for the function value to be around 5500. If you plot the OP's polynomial (change First@domain to 0), you'll see the round-off error is ~0.2%. – Michael E2 Nov 28 '20 at 0:28 • Cool: The 0.2% is almost exactly the result of this: basis = (x - 0)^Range[0, 6]; lmf = LinearModelFit[N@data, basis, {x}]; Max@Abs[List @@ lmf[x] /. x -> Last@domain] \$MachineEpsilon/5000 – Michael E2 Nov 28 '20 at 0:29
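The centering trick is not Mathematica-specific. A small NumPy sketch, using a hand-picked subset of the OP's data points and degree 2 purely for illustration, shows the same contrast between the raw and shifted power bases:

```python
import numpy as np

# A few of the OP's (year, emissions) points, enough to fit a quadratic.
years = np.array([1980.0, 1990.0, 2000.0, 2010.0, 2018.0])
vals = np.array([4716.71636, 5121.17944, 5997.29891, 5700.10834, 5416.27788])

# Standard power basis {x^k}: columns like x^2 ~ 4e6 sit next to 1,
# giving a badly scaled Vandermonde matrix and huge coefficients
# that must cancel to produce values near 5500.
naive = np.polyfit(years, vals, 2)

# Centered basis {(x - 1980)^k}: well-scaled columns, tame coefficients.
shift = years.min()
centered = np.polyfit(years - shift, vals, 2)

# Both represent the same quadratic; evaluating at one year agrees.
y_naive = np.polyval(naive, 2005.0)
y_centered = np.polyval(centered, 2005.0 - shift)
```

The leading coefficient is unchanged by the shift, while the naive constant term balloons to millions to cancel the x^2 column, which is exactly why the OP's Excel equation looked so nasty.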
https://math.stackexchange.com/questions/3074602/initial-guessing-to-bvp4c-matlab
Initial guessing to bvp4c MATLAB

I am working on a 4th order non-linear variable coefficient homogeneous ODE BVP. I am having issues getting a solution using bvp4c. This could be one of many things: there may be no solution within the boundaries I am providing, MATLAB may fail to converge due to the non-linearity, or the initial guesses for the functions may be so far off that it can't work with them. The error I am getting is

Error using bvp4c (line 251) Unable to solve the collocation equations -- a singular Jacobian encountered.

I was wondering if there is a better way to provide initial guesses. I would like to give a vector of points that the solution should be near. Is that possible? Any other advice? By the way, I am unable to post example code.

• Check the boundary conditions; I frequently got errors because some index error there made a solution impossible. If you have any physical insight into the problem, you could guess a good initial point and use the IVP solution as initialization. – LutzL Jan 15 at 16:25
• At this link mathworks.com/help/matlab/ref/bvpinit.html it says for a multipoint BVP, you can specify the points in [a,b] at which the boundary conditions apply, other than the endpoints a and b. Any idea on how to do this? For my problem I have y(a) = h, y'(a) = 0, y(b) = 1, and y'(b) = 0. – tenichols Jan 15 at 21:21
• You can also impose conditions on more than 2 points in the framework of bvp4c. But I do not think that you need that. Depending on what you used in the initial approximation, the negative result can mean that there is no solution close to the straight line $y(t)=(t-a)/(b-a)$, that any solution has a much more winding path, or that there is no solution at all. In general you will need more insight into the rough shape of the solution; like when designing the path of a space probe, you need to know what sling-by maneuvers are to be included.
– LutzL Jan 15 at 21:43
• I have a good idea of what the solution should look like. If I were to divide the equation up into 3 sections [a b c], the a and c sections will be constant but at different values, and at the b section the transition from a --> c will occur; I just want to know how it will occur. Any advice on how to impose conditions on more than one point? – tenichols Jan 16 at 14:30
• You have no control over what the solution will finally look like. But if you can expect that to be the general shape, then setting the values in yinit in that way should be sufficient. If the results do not match the expectations, debug the code, and if that is ok, then debug the theory that led to these expectations. – LutzL Jan 16 at 14:38
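SciPy's `solve_bvp` plays the role of bvp4c in Python and makes the "vector of points the solution should be near" idea explicit: the initial guess is literally an array of solution values on a mesh. This toy linear problem stands in for the OP's 4th-order ODE, just to show the mechanics:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy BVP: y'' = -y with y(0) = 0, y(pi/2) = 1; exact solution y = sin(x).
def rhs(x, y):
    # First-order system: y[0]' = y[1], y[1]' = -y[0].
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Residuals of the boundary conditions at the two endpoints.
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, np.pi / 2, 11)
# The initial guess is a (2, m) array of values the solution should be
# near: here a straight line from 0 to 1, with slope 1 everywhere.
y_guess = np.vstack([np.linspace(0.0, 1.0, x.size), np.ones(x.size)])

sol = solve_bvp(rhs, bc, x, y_guess)
```

In bvp4c terms, this (mesh, values) pair is what `bvpinit(xmesh, yinit)` packages up; encoding a rough expected shape in `y_guess` (such as the constant-transition-constant profile described in the comments) is exactly the kind of initialization the answers recommend.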
http://paperpo.ml/?p=315
# CS 259D One-Class Training for Masquerade Detection

• Background and motivation
• The Schonlau dataset
• Machine learning methods
  • Naive Bayes classifier
  • One-class SVM
• Limitations
• References

## Background and Motivation

• Model user behavior
• Detect anomalous behavior
• The "masquerader" problem is a challenging one
  • Prone to false alarms
• Multi-class approach
  • Each user's samples form one class of the data
  • Self vs. others
  • Requires retraining whenever users join or leave the organization
  • Masquerade samples from non-self users can slip past the model's scrutiny
• One-class approach
  • Build a model for each user from that user's own data only
  • Requires less data
  • Supports distributed deployment

## The Schonlau Dataset

• Unix shell commands from 70 users
• Data collected with Unix acct
• 50 users randomly selected as intrusion targets
• 20 users serve as masqueraders
• 15,000 commands per user
  • Spanning days or months
• The first 5,000 commands are normal
• In the next 10,000 commands, intrusion command blocks are randomly injected
• Size of both normal and intrusion blocks: 100
• Problems
  • Time spans differ greatly across users
  • The number of login sessions varies per user
  • The number of intrusion blocks varies per user (0-24)
  • Users' job functions are unknown
  • acct logs commands in order of completion

## Machine Learning Methods

• Learning task
  • Explicitly identify masqueraders, not specific users

### Naive Bayes Classifier

• Bayes' theorem
• Different commands are assumed independent
• Multivariate Bernoulli model
  • 856 unique commands under Unix
  • Each block is a binary N-dimensional vector
  • Each dimension is modeled as a Bernoulli variable
  • Performs better with small vocabularies
• Multinomial model
  • Each block is an N-dimensional vector
  • The count of each command is the feature
  • Performs better with large vocabularies
• One-class naive Bayes
  • Compute $$p(c_i|u)$$ using only the user's own model
  • For masqueraders, assume each command has probability $$1/N$$ (fully random)
  • No other assumptions are made about masqueraders
  • Given a block $$d$$, compute: $$p(d|{\rm self})/p(d|{\rm non-self})$$
  • Set a threshold to trade off false alarms against detection rate

### One-Class SVM

• Map the data into a high-dimensional feature space
• Maximize the separating margin
• Allow outliers; a parameter sets the fraction permitted to fall on the wrong side of the margin

## Limitations

• One-class SVM with binary features performs better than one-class naive Bayes and one-class SVM with count features
• The problem remains hard; accuracy needs improvement
• 2-gram features perform worse
  • Using 1-grams and 2-grams together may improve performance
• This system should not be used as the only detector
• Command arguments, not just the commands themselves, need to be included

## References

• One-class Training for Masquerade Detection (2003)
• CS 259D Lecture 3
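The one-class naive Bayes score above can be sketched in a few lines of Python. The multinomial self-model, the smoothing constant, and the toy command streams are my own illustrative choices; N = 856 is the unique-command count from the notes:

```python
import math
from collections import Counter

VOCAB = 856  # unique Unix commands in the Schonlau data

def train_self_model(commands, alpha=0.01):
    """Multinomial self-model with additive smoothing.
    Returns p(c|self) for seen commands plus the unseen-command probability."""
    counts = Counter(commands)
    denom = len(commands) + alpha * VOCAB
    probs = {c: (n + alpha) / denom for c, n in counts.items()}
    return probs, alpha / denom

def masquerade_score(block, probs, unseen_p):
    """log[ p(block|self) / p(block|non-self) ], where the non-self model
    assigns every command probability 1/VOCAB (the fully random masquerader).
    Large negative scores suggest a masquerader."""
    score = 0.0
    for c in block:
        score += math.log(probs.get(c, unseen_p)) - math.log(1.0 / VOCAB)
    return score
```

A threshold on this score then controls the trade-off between false alarms and detection rate, exactly as described in the notes.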
http://ptspts.blogspot.com/2009/06/this-blog-post-demonstrates-tex-macro.html
## 2009-06-12

### Using \romannumeral in TeX to do multiple macro expansions

This blog post demonstrates a TeX macro hack: using \romannumeral to expand many macros in a single expansion. The example goal is to define a macro \stars which (when called properly) expands to the specified number of stars as a single expansion. Using a single expansion only makes the macro work in an \expandafter\def...{...} context. The similar macro given in The TeXbook doesn't work in this context because it needs multiple expansions.

```latex
\documentclass{article}
%** Usage: \romannumeral\stars{NUMSTARS}
%** This expands to NUMSTARS stars (* tokens) in a single expansion step.
\def\stars#1{%
  \expandafter\mtostar\expandafter{\expandafter}\romannumeral\number#1 000z}
\def\firstoftwo#1#2{#1}%
\def\secondoftwo#1#2{#2}%
\def\mtostar#1#2{%
  \ifx#2z\expandafter\firstoftwo\else\expandafter\secondoftwo\fi
  {0 #1}{\mtostar{#1*}}%
}
\begin{document}
\expandafter\def\expandafter\mystars\expandafter{\romannumeral\stars{4}}%
\texttt{(\meaning\mystars)}
\end{document}
```
http://en.wikipedia.org/wiki/Arbelos
# Arbelos

A shoemaker's knife.

In geometry, an arbelos is a plane region bounded by a semicircle of diameter 1, connected at the corners to semicircles of diameters r and (1 − r), all rising above a common baseline. Archimedes is believed to be the first mathematician to study its mathematical properties, as it appears in propositions four through eight of his Book of Lemmas. Arbelos literally means "shoemaker's knife" in Greek; it resembles the blade of a knife used by cobblers from antiquity to the current day.[1]

## Properties

### Area

A circle with diameter HA is equal in area to the arbelos.

#### Proof

If BC = 1 and BA = r, then

- In triangle BHA: $r^2+h^2=x^2$
- In triangle CHA: $(1-r)^2+h^2=y^2$
- In triangle BHC: $x^2+y^2=1$

By substitution: $y^2=(1-r)^2+x^2-r^2$. By expansion: $y^2=1-2r+x^2$. By substituting for $y^2$ into the equation for triangle BHC and solving for x:

$x=\sqrt{r}$

By substituting this, solve for y and h:

$y=\sqrt{1-r}$

$h=\sqrt{r-r^2}$

The radius of the circle with center O is:

$\frac{1}{2}\sqrt{r-r^2}.$

Therefore, the area is:

$A_{circle}=\pi\left(\frac{1}{2}\sqrt{r-r^2}\right)^2=\frac{\pi r}{4}-\frac{\pi r^2}{4}$

The area of the arbelos is the area of the large semicircle minus the area of the two smaller semicircles. Therefore the area of the arbelos is:

$A_{arbelos}=\frac{\pi}{8}-\left(\frac{\pi}{2}\left(\frac{r}{2}\right)^2+\frac{\pi}{2}\left(\frac{1-r}{2}\right)^2\right)$

$A_{arbelos}=\frac{\pi-\pi r^2-\pi+2\pi r-\pi r^2}{8}$

$A_{arbelos}=\frac{\pi r}{4}-\frac{\pi r^2}{4}=A_{circle}$

Q.E.D.[2]

This property appears as Proposition 4 in Archimedes' Book of Lemmas:

If AB be the diameter of a semicircle and N any point on AB, and if semicircles be described within the first semicircle and having AN, BN as diameters respectively, the figure included between the circumferences of the three semicircles is [what Archimedes called "αρβελος"]; and its area is equal to the circle on PN as diameter, where PN is perpendicular to AB and meets the original semicircle in P. [3]

### Rectangle

The segment BH intersects the semicircle BA at D. The segment CH intersects the semicircle AC at E. Then DHEA is a rectangle.

Proof: Angles BDA, BHC, and AEC are right angles because they are inscribed in semicircles (by Thales' theorem). The quadrilateral ADHE therefore has three right angles, so it is a rectangle. Q.E.D.

### Tangents

The line DE is tangent to semicircle BA at D and semicircle AC at E.

Proof: Since angle BDA is a right angle, angle DBA equals π/2 minus angle DAB. However, angle DAH also equals π/2 minus angle DAB (since angle HAB is a right angle). Therefore triangles DBA and DAH are similar. Therefore angle DIA equals angle DOH, where I is the midpoint of BA and O is the midpoint of AH. But AOH is a straight line, so angle DOH and DOA are supplementary angles. Therefore the sum of angles DIA and DOA is π. Angle IAO is a right angle. The sum of the angles in any quadrilateral is 2π, so in quadrilateral IDOA, angle IDO must be a right angle. But ADHE is a rectangle, so the midpoint O of AH (the rectangle's diagonal) is also the midpoint of DE (the rectangle's other diagonal). As I (defined as the midpoint of BA) is the center of semicircle BA, and angle IDE is a right angle, then DE is tangent to semicircle BA at D. By analogous reasoning DE is tangent to semicircle AC at E. Q.E.D.
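The area identity lends itself to a quick numerical sanity check (a Python sketch, not part of the article): for BC = 1 and BA = r, the arbelos area should equal the area of the circle on AH = √(r − r²).

```python
import math

def arbelos_area(r):
    # big semicircle on BC = 1 minus the two semicircles on BA = r and AC = 1 - r
    return math.pi / 8 - (math.pi / 2 * (r / 2) ** 2 +
                          math.pi / 2 * ((1 - r) / 2) ** 2)

def circle_on_AH_area(r):
    h = math.sqrt(r - r * r)      # AH = sqrt(r - r^2), as in the proof
    return math.pi * (h / 2) ** 2

# The two areas agree for any position of A along the diameter.
for r in (0.1, 0.25, 0.5, 0.9):
    assert math.isclose(arbelos_area(r), circle_on_AH_area(r))
```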
https://shanexuan.com/2016/08/
# Emulate ggplot using plot(): logit regression

I find using ggplot for logistic regression confusing, and I've already spent quite some time coding my own template for logit regression in the past, so here's a quick "fix" to make your simple plots produced by plot() emulate ggplot! Why would one bother to do this? Because whoever doesn't think iris blue (#00BFC4) is pretty has a problem. The coral-ish color can be defined by col=rgb(248/256, 118/256, 109/256), and the iris blue can be defined by col=rgb(0, 191/256, 196/256).

The figure below was generated by (1) first plotting the raw data of the response variable and its covariates while suppressing all labels (specifically, "ann" and "axes"), and (2) then adding the logit plot to the original one using par(new=TRUE).

Maybe it would save more time if I just spent ten minutes reading ggplot's grammar while plotting logit regression…

Figure: Emulating ggplot using plot()

# Loop through files to loop through variables!

Say you have a bunch of data files formatted in exactly the same way (which is not rare if you are scraping or if the data are clean). How do you loop through all the files at once, extract all the useful information, and bind them to a big matrix? Consider the following code. Suppose all my files are named "1.csv", …, "5.csv"; we loop through the files by

```r
file.names <- c("1", "2", "3", "4", "5")
data <- list()
for (i in seq_along(file.names)) {
  # Store each file's lines in a list; overwriting a single `data`
  # object on every iteration would keep only the last file.
  data[[i]] <- readLines(paste(file.names[i], "csv", sep = "."))
}
```

Oftentimes you would need to reshape your data. Suppose we are looking at data such as

```
year  place1  place2
1999     1.1     7.8
...
```

An efficient way to reshape your data is to write a melt function:

```r
library(reshape2)
my.melt <- function(x) {
  melt(x, id.vars = c("year"), variable.name = "place")
}
```

Since all the files are the same, we get a long list of variables that have the same dimension. Thus, we can merge all of them.
Consider the example where I want to merge two of my variables:

```r
var.names <- list(var1, var2)
for (i in seq_along(var.names)) {
  var.names[[i]] <- my.melt(var.names[[i]])
}
```

Alternatively, you can use lapply():

```r
reshape <- lapply(var.names, my.melt)
```

Now, we need to cbind() all our data:

```r
datalist <- list()  # create an empty list
for (i in 1:5) {
  datalist[[i]] <- reshape[[i]]
}
merge <- do.call(cbind, datalist)
names(merge) <- c(var.names)
```

Definitely not the smartest way — but it works.

# A quick note on causal effect

This post is a quick note as I have been reading papers and books on causal inference recently. I am sure that the materials are extremely intuitive to most social scientists, but I hope some of my notes could help beginning grad students quickly understand the notion of causal effect.

I recently came across a paper by Barnow, Cain, and Goldberger, published thirty-six years ago. In their paper, they talked about how to operationalize the following equation

$y=\alpha z+w+\varepsilon,$

where $\alpha$ is the true treatment effect, $z$ is treatment status, $y$ is the outcome, and $w$ is an unobserved variable, with random term $\varepsilon.$ The basic idea is to introduce observable variables that determine the assignment in the equation: "Assume that an observed variable, $t$, was used to determine assignment into the treatment group and the control group… [S]ince $t$ is the only systematic determinant of treatment status, $t$ will capture any correlation between $z$ and $w$. Thus, the observed $t$ could replace the unobserved $w$ as the explanatory variable."

To understand their argument, let's quickly review the conditional independence assumption (CIA).
The CIA states that conditional on $t,$ the outcomes are independent of treatment status, that is, $\{y_{0},y_{1}\}\perp\!\!\!\perp z|t.$ Now consider the decomposition

$\underbrace{ E[y|z=1] - E[y|z=0]}_{\text{observed difference}} = \underbrace{ E[y_{1}-y_{0}|z=1]}_{\text{treatment effect}} + \underbrace{ (E[y_{0}|z=1] - E[y_{0}|z=0])}_{\text{selection bias}}.$

The selection bias term is exactly what we want to eliminate. Conditional on $t,$ we obtain

$E[y|t,z=1] - E[y|t,z=0] = E[y_{1}-y_{0}|t].$

In this way, the selection bias disappears! Now, let’s go back to Barnow, Cain, and Goldberger’s equation, $y=\alpha z+w+\varepsilon.$ Under the CIA, we can decompose $w$ into $w=\beta t + \varepsilon^*,$ where $\beta$ is a vector of population regression coefficients assumed to satisfy $E[w|t]=\beta t.$ That is,

$y=\alpha z +\beta t + \tilde{\varepsilon},$

where $\tilde{\varepsilon} = \varepsilon^* + \varepsilon$ and $\alpha$ is the causal effect.
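To make the CIA argument concrete, here is a small simulated sketch (all numbers, the assignment rule, and the variable names are invented for illustration): treatment $z$ is assigned based on the observed $t$, the unobserved $w$ depends on $t$, and an OLS regression that controls for $t$ recovers $\alpha$, while the naive difference in means does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
alpha, beta = 2.0, 1.5          # true treatment effect and E[w|t] slope (assumed)

t = rng.standard_normal(n)                           # observed assignment variable
z = (t + rng.standard_normal(n) > 0).astype(float)   # treatment depends on t
w = beta * t + rng.standard_normal(n)                # "unobserved" confounder
y = alpha * z + w + rng.standard_normal(n)

# Naive comparison: E[y|z=1] - E[y|z=0] mixes in selection bias through t.
naive = y[z == 1].mean() - y[z == 0].mean()

# Controlling for t (invoking the CIA): OLS of y on [1, z, t] recovers alpha.
X = np.column_stack([np.ones(n), z, t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = coef[1]

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

With this design, the naive contrast estimates roughly $\alpha + \beta\,(E[t|z=1]-E[t|z=0])$ and is biased upward, while including $t$ in the regression isolates $\alpha$.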
https://hal.inria.fr/hal-02062891v2
## Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers

Journal article, Analysis and Applications, 2020.

Simon Foucart, Rémi Gribonval, Laurent Jacques, Holger Rauhut

#### Abstract

We investigate the problem of recovering jointly $r$-rank and $s$-bisparse matrices from as few linear measurements as possible, considering arbitrary measurements as well as rank-one measurements. In both cases, we show that $m \asymp r s \ln(en/s)$ measurements make the recovery possible in theory, meaning via a nonpractical algorithm. In case of arbitrary measurements, we investigate the possibility of achieving practical recovery via an iterative-hard-thresholding algorithm when $m \asymp r s^\gamma \ln(en/s)$ for some exponent $\gamma>0$. We show that this is feasible for $\gamma=2$ and that the proposed analysis cannot cover the case $\gamma \leq 1$. The precise value of the optimal exponent $\gamma \in [1,2]$ is the object of a question, raised but unresolved in this paper, about head projections for the jointly low-rank and bisparse structure. Some related questions are partially answered in passing. For the rank-one measurements, we suggest on arcane grounds an iterative-hard-thresholding algorithm modified to exploit the nonstandard restricted isometry property obeyed by this type of measurements.

#### Domains

Computer Science [cs]; Signal and Image Processing

### Dates and versions

hal-02062891, version 1 (11-03-2019); version 2 (25-10-2019)

### Cite

Simon Foucart, Rémi Gribonval, Laurent Jacques, Holger Rauhut. Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers.
Analysis and Applications, 2020, 18 (01), pp. 25–48. ⟨10.1142/S0219530519410094⟩. ⟨hal-02062891v2⟩
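Since the abstract centers on iterative hard thresholding (IHT), here is a minimal background sketch of plain IHT for ordinary sparse-vector recovery — not the jointly low-rank-and-bisparse variant studied in the paper, and with an assumed conservative step size — just to illustrate the projected-gradient structure $x \leftarrow H_s(x + \mu A^\top(y - Ax))$:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(A, y, s, mu=0.5, n_iter=50):
    """Plain iterative hard thresholding for s-sparse vectors."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    return x

rng = np.random.default_rng(0)
m, n, s = 60, 128, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurements, RIP-friendly scaling
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

x_hat = iht(A, y, s)
print(np.count_nonzero(x_hat))   # at most s entries survive the thresholding
```

The matrix variants in the paper replace $H_s$ with projections onto the jointly low-rank and bisparse set, whose head projections are precisely the open question raised in the abstract.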
https://terradifm.spacecrafted.com/services/hardscape
Hardscape

Deck Design

Gazebo Design

Retaining Wall Design

Price Match Guarantee

Free, no obligation quote.
https://mathematica.stackexchange.com/questions/226339/error-on-using-compile
# Error on using Compile[]

My code is below.

    deltaX = 1/128; W = 256; Mmax = 40;
    lPoly = ParallelTable[
      LegendreP[order, (-1.) + (deltaX/2.) + ((index - 1.)*deltaX)],
      {order, 0, Mmax}, {index, 1, W}];
    XPoly = Compile[{{index, _Integer}},
      Block[{},
        polyMatrix = PadRight[
          Table[lPoly[[m - n + 1, index]], {m, 0, Mmax}, {n, 0, m}]];
        polyMatrix],
      CompilationTarget -> "C", RuntimeAttributes -> {Listable},
      Parallelization -> True,
      RuntimeOptions -> {"CatchMachineIntegerOverflow" -> False}];

If I run XPoly[1], it will return:

    CompiledFunction::cfse : Compiled expression {{1.},{-0.996094,1.},<<48>>,<<71>>} should be a machine-size real number.

I have encountered this kind of error multiple times, and sometimes it got resolved, but I don't know why.

• If you decide to turn to Compile, I suggest starting from reading this post: mathematica.stackexchange.com/a/104031/1871 And now you're against rule (2) and rule (6) there. – xzczd Jul 22 '20 at 3:10
• The main problem is that your Table constructs a ragged array (rows are not all the same length). Ragged arrays are not allowed in Compile. – Michael E2 Jul 22 '20 at 3:42
• @MichaelE2 Thanks Mike. Actually I use PadRight here; to avoid confusion, I didn't post it. If you mean constructing a ragged array is not allowed, that would solve my problem. – PalvinWang Jul 22 '20 at 4:34
• Even with PadRight, you first generate a ragged array before it is padded. But you can apply PadRight outside of Compile to generate XPoly as a rectangular array. Also make sure that all 0 are actually floating point 0. (I am not sure whether Compile is clever enough to convert them on its own.) – Henrik Schumacher Jul 22 '20 at 4:55

This should work better: it generates a rectangular array (filled with zeroes) first and then fills in the entries:

    deltaX = 1./128; W = 256; Mmax = 40;
    lPoly = Developer`ToPackedArray[
      Table[
        LegendreP[order, -1. + 0.5 deltaX + (index - 1.) deltaX],
        {order, 0, Mmax}, {index, 1, W}]
      ];
    XPoly = Compile[{{index, _Integer}, {lPoly, _Real, 2}},
      Block[{polyMatrix, Mmax},
        Mmax = Length[lPoly] - 1;
        polyMatrix = Table[0., {Mmax + 1}, {Mmax + 1}];
        Do[
          polyMatrix[[m + 1, n + 1]] = lPoly[[m - n + 1, index]],
          {m, 0, Mmax}, {n, 0, m}];
        polyMatrix],
      CompilationTarget -> "C", RuntimeAttributes -> {Listable},
      Parallelization -> True
      ];
    test = XPoly[{1, 2, 3}, lPoly];

• Thanks a lot Henrik! – PalvinWang Jul 22 '20 at 12:27
• You're welcome! – Henrik Schumacher Jul 22 '20 at 12:30
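As a side note for readers coming from Python, the underlying issue — a ragged list of rows cannot be packed into a rectangular array, whereas pre-allocating zeros and filling the lower triangle can — has a direct NumPy analogue. This is a hypothetical illustration with dummy data, not part of the original question:

```python
import numpy as np

Mmax = 4
index = 0
# Stand-in for the LegendreP table; random values just for the demo.
lPoly = np.random.default_rng(0).standard_normal((Mmax + 1, 8))

# Ragged version: rows of different lengths cannot form one packed
# rectangular array, much like Compile rejects ragged Table output.
ragged = [[lPoly[m - n, index] for n in range(m + 1)] for m in range(Mmax + 1)]
lengths = {len(row) for row in ragged}           # row lengths 1 through Mmax+1

# Rectangular version: allocate zeros first, then fill the lower triangle,
# mirroring the accepted answer's Table[0., ...] + Do[...] pattern.
poly = np.zeros((Mmax + 1, Mmax + 1))
for m in range(Mmax + 1):
    for n in range(m + 1):
        poly[m, n] = lPoly[m - n, index]

print(lengths, poly.shape)
```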
http://mail.theinfolist.com/php/SummaryGet.php?FindGo=Membrane
TheInfoList A membrane is a selective barrier; it allows some things to pass through but stops others. Such things may be molecules, ions, or other small particles. Biological membranes include cell membranes (outer coverings of cells or organelles that allow passage of certain constituents);[1] nuclear membranes, which cover a cell nucleus; and tissue membranes, such as mucosae and serosae. Synthetic membranes are made by humans for use in laboratories and industry (such as chemical plants). This concept of a membrane has been known since the eighteenth century but was used little outside of the laboratory until the end of World War II. Drinking water supplies in Europe had been compromised by the war and membrane filters were used to test for water safety. However, due to the lack of reliability, slow operation, reduced selectivity and elevated costs, membranes were not widely exploited. The first use of membranes on a large scale was with micro-filtration and ultra-filtration technologies. Since the 1980s, these separation processes, along with electrodialysis, are employed in large plants and, today, several experienced companies serve the market.[2] The degree of selectivity of a membrane depends on the membrane pore size. Depending on the pore size, they can be classified as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) membranes. Membranes can also be of various thickness, with homogeneous or heterogeneous structure. Membranes can be neutral or charged, and particle transport can be active or passive. The latter can be facilitated by pressure, concentration, chemical or electrical gradients of the membrane process. 
Membranes can be generally classified into synthetic membranes and biological membranes.[3]

## Membrane processes classifications

### Microfiltration (MF)

Microfiltration removes particles larger than 0.08-2 µm and operates within a range of 7-100 kPa.[4] Microfiltration is used to remove residual suspended solids (SS), to remove bacteria in order to condition the water for effective disinfection, and as a pre-treatment step for reverse osmosis. Relatively recent developments are membrane bioreactors (MBR), which combine microfiltration and a bioreactor for biological treatment.
### Ultrafiltration (UF)

Ultrafiltration removes particles larger than 0.005-2 µm and operates within a range of 70-700 kPa.[4] Ultrafiltration is used for many of the same applications as microfiltration. Some ultrafiltration membranes have also been used to remove dissolved compounds with high molecular weight, such as proteins and carbohydrates. They can also remove viruses and some endotoxins.

The wall of an ultrafiltration hollow fiber membrane, with characteristic outer (top) and inner (bottom) layers of pores.

### Nanofiltration (NF)

Nanofiltration is also known as “loose” RO and can reject particles smaller than 0.002 µm. Nanofiltration is used for the removal of selected dissolved constituents from wastewater. NF is primarily developed as a membrane softening process, which offers an alternative to chemical softening. Likewise, nanofiltration can be used as a pre-treatment before directed reverse osmosis. The main objectives of NF pre-treatment are:[5] (1) minimize particulate and microbial fouling of the RO membranes by removal of turbidity and bacteria, (2) prevent scaling by removal of the hardness ions, (3) lower the operating pressure of the RO process by reducing the feed-water total dissolved solids (TDS) concentration.
### Reverse osmosis (RO)

Reverse osmosis is commonly used for desalination. As well, RO is commonly used for the removal of dissolved constituents from wastewater remaining after advanced treatment with microfiltration. RO excludes ions but requires high pressures to produce deionized water (850-7000 kPa).

### Nanostructured membranes

An emerging class of membranes relies on nanostructured channels to separate materials at the molecular scale.
These include carbon nanotube membranes, graphene membranes, membranes made from polymers of intrinsic microporosity (PIMs), and membranes incorporating metal-organic frameworks (MOFs). These membranes can be used for size-selective separations such as nanofiltration and reverse osmosis, but also for adsorption-selective separations such as olefins from paraffins and alcohols from water, which traditionally have required expensive and energy-intensive distillation.

## Membrane configurations

In the membrane field, the term module is used to describe a complete unit composed of the membranes, the pressure support structure, the feed inlet, the outlet permeate and retentate streams, and an overall support structure. The principal types of membrane modules are:

• Tubular, where membranes are placed inside porous support tubes, and these tubes are placed together in a cylindrical shell to form the unit module.

The key elements of any membrane process relate to the influence of the following parameters on the overall permeate flux:

• The membrane permeability (k)
• The operational driving force per unit membrane area (trans-membrane pressure, TMP)
• The fouling and subsequent cleaning of the membrane surface

### Flux, pressure, permeability

The total permeate flow from a membrane system is given by the following equation:

${\displaystyle Q_{p}=F_{w}\cdot A}$

where Qp is the permeate stream flowrate [kg·s−1], Fw is the water flux rate [kg·m−2·s−1] and A is the membrane area [m2].

The permeability (k) [m·s−2·bar−1] of a membrane is given by:

${\displaystyle k={F_{w} \over P_{TMP}}}$
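As a quick numeric illustration of the two flux relations above (the values of k, TMP, and A below are invented for the example, not taken from the article):

```python
# Illustrative numbers only: a membrane with permeability k, operated at
# trans-membrane pressure TMP over total membrane area A.
k = 2.0       # permeability [kg·m^-2·s^-1·bar^-1], assumed
TMP = 1.5     # trans-membrane pressure [bar], assumed
A = 40.0      # membrane area [m^2], assumed

F_w = k * TMP   # water flux rate, rearranging k = F_w / P_TMP
Q_p = F_w * A   # total permeate flow, from Q_p = F_w * A
print(Q_p)      # 120.0 (kg/s)
```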
https://blog.ai.aioz.io/guides/computer-vision/IntroductiontoDiffusionModels_31/
# Introduction to Diffusion Models A kind introduction to Diffusion Models and its related information. ## #Introduction Recently, diffusion models have demonstrated remarkable performance on several generative tasks, ranging from unconditional or text-guided image generation, audio synthesis, natural language generation, human motion synthesis to 3D pointcloud generation, most of which were previously dominated by GAN-based models. Research also has shown that diffusion models are considerably better than GANs in terms of generation quality while also being able to control the results [1]. In this blog, we will go through the mathematics and intuition behind this compelling generative model. Figure 1. Overview of Diffusion Model. ## #Mechanism behind Diffusion Model The underlying principle of diffusion model is based on a popular idea of non-equilibrium statistical thermodynamics: we use a Markov chain to gradually convert one distribution into another [2]. Intuitively, it is called "diffusion" because in this model, data points in our original data space are gradually diffused from one position into another until reaching a state that is just a set of noise. Unlike the common VAE or GAN models where the data is typically compressed into a low dimensional latent space, diffusion models are learned with a fixed procedure and with high dimensional latent variable (same as the original data, see Figure 1). Now let's define the forward diffusion process. ### #Forward diffusion process Figure 2. Example of forward diffusion Given a sample from our real data distribution $x_0 \sim q(x_0)$, we define the forward diffusion process as a Markov chain, which gradually adds random Gaussian noise (with diagonal covariance) to the data under a pre-defined schedule up to $t=1,2,...,T$ steps. 
Specifically, the distribution of the noised sample at step $t$ only depends on the previous step $t-1$: $\color{red}{q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t} x_{t-1}, \beta_tI)} \tag{1}$ where $\beta_t$ is the noise schedule controlling the step size, i.e., how slowly noise is added in the process. Typically, the $\beta_t$ lie in $(0,1)$ and satisfy $0 < \beta_1 < \beta_2 < ... < \beta_T < 1$ An interesting property of the forward process, which makes it a very powerful generative model, is that when the noise step size is small enough and $T\rightarrow \infty$, the distribution at the final step $x_T$ becomes a standard Gaussian distribution $\mathcal{N}(0,I)$. In other words, we can easily sample from this (pure) noise distribution and then follow a reverse process to reconstruct a real data sample, which we will discuss later. Another notable property of this process is that we can directly sample $x_t$ at any time step $t$ without going through the whole chain. Specifically, let $\alpha_t = 1 -\beta_t$; from Eq. 1, we can derive the sample using the reparameterization trick $(x\sim\mathcal{N}(\mu,\sigma^2) \Leftrightarrow x = \mu + \sigma\epsilon )$ as follows: $x_t = \sqrt{\alpha_t}x_{t-1} + \sqrt{1 - \alpha_t}\epsilon_{t-1} \\ = \sqrt{\alpha_t} \left(\sqrt{\alpha_{t-1}}x_{t-2} + \sqrt{1 - \alpha_{t-1}}\epsilon_{t-2} \right) + \sqrt{1 - \alpha_t}\epsilon_{t-1} \\ = \sqrt{\alpha_t \alpha_{t-1}}x_{t-2} + \left[\sqrt{\alpha_t(1 - \alpha_{t-1})}\epsilon_{t-2} + \sqrt{1 - \alpha_t}\epsilon_{t-1}\right]$ where $\epsilon \sim \mathcal{N}(0,I)$. Note that when we add two independent Gaussians with variances $\sigma_1^2$ and $\sigma_2^2$, the merged distribution is also a Gaussian and has variance $\sigma_1^2 + \sigma_2^2$.
Therefore, we obtain the merged variance (the bracketed sum $[\dots]$ in the equation above) as: $\alpha_t(1 - \alpha_{t-1}) + 1-\alpha_t = 1-\alpha_t\alpha_{t-1}$ Thus $x_t$ can be written as: $x_t = \sqrt{\alpha_t \alpha_{t-1}} x_{t-2} + \sqrt{1 - \alpha_t \alpha_{t-1}} \bar{\epsilon}_{t-2} = \dots \\ x_t = \sqrt{\alpha_t \alpha_{t-1}\dots\alpha_{1}} x_0 + \sqrt{1 - \alpha_t \alpha_{t-1}\dots\alpha_{1}} \epsilon \\ x_t = \sqrt{\bar{\alpha}_t} x_0 + \sqrt{1-\bar{\alpha}_t}\epsilon \tag{2}$ where $\bar{\alpha}_t = \prod_{i=1}^t \alpha_i$. We can also re-write the forward diffusion as a distribution conditioned on $x_0$: $\color{red}{q(x_t \vert x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t} x_0, (1 - \bar{\alpha}_t)I)} \tag{3}$ This nice property makes training much more efficient, since we only need to compute the loss at a few random timesteps $t$ instead of over the whole process, as we will see later. ### Reverse diffusion process We have seen that the forward diffusion essentially pushes a sample off the data manifold and slowly transforms it into noise. The main goal of the reverse process is to learn a trajectory back to the manifold and thus reconstruct the sample in the original data distribution from the noise. Feller [3] demonstrated that for a Gaussian diffusion process with small step size $\beta$, the reverse of the diffusion process has the same functional form as the forward process. This means that since $q(x_t|x_{t-1})$ is a Gaussian distribution, if $\beta_t$ is small, then $q(x_{t-1}|x_t)$ will also be a Gaussian distribution. Unfortunately, estimating the reverse distribution $q(x_{t-1}|x_t)$ directly is very difficult, because in the early steps of the reverse process ($t$ near $T$) our sample is just random noise with high variance. In other words, there are too many possible trajectories from that noise back to the original data $x_0$, and integrating over them would effectively require the entire dataset.
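Before approximating the reverse process, the closed-form forward property (Eq. 3) is easy to check numerically: sampling $x_t$ step by step through Eq. 1 should agree in mean and variance with the one-shot formula of Eq. 3. A minimal NumPy experiment with a toy schedule (all schedule values are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10
betas = np.linspace(0.01, 0.2, T)   # toy noise schedule beta_1 < ... < beta_T
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)      # \bar{alpha}_t = prod of alpha_i

x0 = 1.0
N = 200_000                         # number of Monte Carlo trajectories

# Step-by-step forward process (Eq. 1), vectorized over trajectories.
x = np.full(N, x0)
for t in range(T):
    x = np.sqrt(alphas[t]) * x + np.sqrt(betas[t]) * rng.standard_normal(N)

# Closed form (Eq. 3): x_T ~ N(sqrt(alpha_bar_T) * x0, 1 - alpha_bar_T).
print(x.mean(), np.sqrt(alpha_bar[-1]) * x0)   # both ~ sqrt(alpha_bar_T) x0
print(x.var(), 1.0 - alpha_bar[-1])            # both ~ 1 - alpha_bar_T
```

Up to Monte Carlo error, the empirical moments of the iterated chain match the Eq. 3 predictions, which is exactly what lets training sample a random $t$ directly.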
And ultimately, our final goal is to learn a model to approximate these reverse distributions in order to build the generative reverse denoising process, specifically: $p_\theta(x_{0:T}) = p(x_T) \prod^T_{t=1} p_\theta(x_{t-1} | x_t), \quad p_\theta(x_{t-1} | x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(\mathbf{x}_t, t), \Sigma_\theta(\mathbf{x}_t, t))$ where $\mu_\theta(\mathbf{x}_t, t)$ is the mean predicted by the model (a neural network with parameters $\theta$), and $\Sigma_\theta(\mathbf{x}_t, t)$ is the variance that is typically kept fixed for efficient learning. Fortunately, by conditioning on $x_0$, the posterior of the reverse process is tractable and becomes a Gaussian distribution [4]. Our target now is to find the posterior mean $\tilde{\mu}_t(x_t,x_0)$ and the variance $\tilde{\beta}_t$. Following Bayes rule, we can write: $q(x_{t-1} | x_t, x_0) = \frac{ q(x_t | x_{t-1}, x_0) q(x_{t-1} | x_0) }{ q(x_t | x_0) }$ where $\begin{cases} q(x_t | x_{t-1}, x_0) = q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{\alpha_t} x_{t-1}, \beta_t I), \\ q(x_{t-1} | x_0) = \mathcal{N}(x_{t-1}; \sqrt{\bar{\alpha}_{t-1}}x_0, (1-\bar{\alpha}_{t-1})I), \\ q(x_t | x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t} x_0, (1 - \bar{\alpha}_t)I) \end{cases}$ $q(x_t | x_{t-1}, x_0) = q(x_t | x_{t-1})$ because the process is a Markov chain, where the future state ($x_t$) is conditionally independent with all previous states ($x_0,\dots,x_{t-2}$) given the present state ($x_{t-1}$). 
We also leverage the fact that all of the distributions on the right hand side are Gaussian, which means we can apply the density function ($\mathcal{N}(x;\mu,\sigma) \propto \exp(-\frac{(x-\mu)^2}{2\sigma^2})$) as follows: $q(x_{t-1} | x_t, x_0) \propto\exp \left[ -\frac{(x_t - \sqrt{\alpha_t} x_{t-1})^2}{2\beta_t} - \frac{(x_{t-1} - \sqrt{\bar{\alpha}_{t-1}} x_0)^2}{2(1-\bar{\alpha}_{t-1})} + \frac{(x_t - \sqrt{\bar{\alpha}_t} x_0)^2}{2(1-\bar{\alpha}_t)} \right] \\ = \exp \left[ -\frac{1}{2} \left( \frac{x_t^2 - 2\sqrt{\alpha_t} x_t x_{t-1} + \alpha_t x_{t-1}^2 }{\beta_t} + \frac{ x_{t-1}^2 - 2 \sqrt{\bar{\alpha}_{t-1}} x_0 x_{t-1} + \bar{\alpha}_{t-1} x_0^2}{1-\bar{\alpha}_{t-1}} - \frac{(x_t - \sqrt{\bar{\alpha}_t} x_0)^2}{1-\bar{\alpha}_t} \right) \right]\\ = \exp \left[ -\frac{1}{2} \left( \frac{\alpha_t x^2_{t-1}}{\beta_t} + \frac{x^2_{t-1}}{1-\bar{\alpha}_{t-1}} - \frac{2\sqrt{\alpha_t} x_t x_{t-1} }{\beta_t} - \frac{2 \sqrt{\bar{\alpha}_{t-1}} x_0 x_{t-1}}{1-\bar{\alpha}_{t-1}} \right) + O(x_0,x_t) \right]$ where $O(x_0,x_t)$ collects terms that depend only on $x_0$ and $x_t$ but not on $x_{t-1}$, so it can be effectively ignored. Therefore we have: $q(x_{t-1} | x_t, x_0) \propto \exp\left[ -\frac{1}{2} \left( (\frac{\alpha_t}{\beta_t} + \frac{1}{1 - \bar{\alpha}_{t-1} } )\color{green}{x^2_{t-1}} - 2 (\frac{\sqrt{\alpha_t}}{\beta_t} x_t + \frac{\sqrt{\bar{\alpha}_{t-1}}}{1 - \bar{\alpha}_{t-1}} x_0)\color{orange}{x_{t-1}} \right) \right]$ Comparing with the Gaussian density function $\exp(-\frac{(x-\mu)^2}{2\sigma^2}) = \exp(-\frac{1}{2} (\frac{\color{green}{x^2}}{\sigma^2} -\frac{2\mu \color{orange}{x}}{\sigma^2} + \frac{\mu^2}{\sigma^2}))$, we can see some similarities. Recall that $\bar{\alpha}_t = \prod_{i=1}^t\alpha_i = \alpha_t\bar{\alpha}_{t-1}$ and $\beta_t + \alpha_t = 1$.
Matching the variance term, we have the posterior variance: $\color{green}{\tilde{\beta}_t} = \frac{1}{\frac{\alpha_t}{\beta_t} + \frac{1}{1 - \bar{\alpha}_{t-1}}} = \frac{1}{\frac{\alpha_t - \bar{\alpha}_t + \beta_t}{\beta_t(1 - \bar{\alpha}_{t-1})}} = {\frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \cdot \beta_t} \tag{4}$ We also obtain the mean: $\frac{\color{orange}{\tilde{\mu}_t(x_t,x_0)}}{\color{green}{\tilde{\beta}_t}} = \frac{\sqrt{\alpha_t}}{\beta_t} x_t + \frac{\sqrt{\bar{\alpha}_{t-1} }}{1 - \bar{\alpha}_{t-1}} x_0 \\ \rightarrow \tilde{\mu}_t(x_t,x_0) = (\frac{\sqrt{\alpha_t}}{\beta_t} x_t + \frac{\sqrt{\bar{\alpha}_{t-1} }}{1 - \bar{\alpha}_{t-1}} x_0)\tilde{\beta}_t\\ = (\frac{\sqrt{\alpha_t}}{\beta_t} x_t + \frac{\sqrt{\bar{\alpha}_{t-1} }}{1 - \bar{\alpha}_{t-1}} x_0) {\frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \cdot \beta_t}$ $\rightarrow \color{orange}{\tilde{\mu}_t(x_t,x_0)} = \frac{\sqrt{\alpha_t}(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t} x_t + \frac{\sqrt{\bar{\alpha}_{t-1}}\beta_t}{1 - \bar{\alpha}_t} x_0 \tag{5}$ Now we obtain the final formula for the reverse distribution: $\color{blue}{q(x_{t-1} | x_t, x_0) = \mathcal{N}(x_{t-1}; \tilde{\mu}_t(x_t, x_0), \tilde{\beta}_t I)} \tag{6}$ So far we have covered the core concepts behind the two essential constituents of the diffusion model: the forward process and the reverse process. In the next post, we will go into detail about the training loss as well as the sampling algorithm of diffusion models. ## #References [1] Dhariwal P, Nichol A. Diffusion models beat GANs on image synthesis. In NeurIPS 2021. [2] Sohl-Dickstein J, Weiss E, Maheswaranathan N, Ganguli S. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML 2015. [3] Feller W. On the theory of stochastic processes, with particular reference to applications. In Selected Papers I 2015 (pp. 769-798). Springer, Cham. [4] Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. In NeurIPS 2020.
# Find the area of the triangle whose vertices are A(− 1, 2), B(2, 4), C(0, 0) - Mathematics and Statistics

Find the area of the triangle whose vertices are A(− 1, 2), B(2, 4), C(0, 0).

#### Solution

Here, A(x1, y1) ≡ A(–1, 2), B(x2, y2) ≡ B(2, 4), C(x3, y3) ≡ C(0, 0).

Area of a triangle $= \frac{1}{2}\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$

∴ A(ΔABC) $= \frac{1}{2}\begin{vmatrix} -1 & 2 & 1 \\ 2 & 4 & 1 \\ 0 & 0 & 1 \end{vmatrix}$

$= \frac{1}{2}[-1(4 - 0) - 2(2 - 0) + 1(0 - 0)] = \frac{1}{2}(-4 - 4) = \frac{1}{2}(-8) = -4$

Since area cannot be negative, we take the absolute value.

∴ A(ΔABC) = 4 sq. units

Concept: Application of Determinants - Area of a Triangle

#### APPEARS IN

Balbharati Mathematics and Statistics 1 (Commerce) 11th Standard Maharashtra State Board, Chapter 6 Determinants, Miscellaneous Exercise 6 | Q 7. (i) | Page 95
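The determinant formula used in the solution can be sketched in a few lines of Python. The function name `triangle_area` is illustrative; the expansion follows the same first-row cofactor expansion as the worked solution, and the absolute value handles the sign.

```python
# Area of a triangle from the 3x3 determinant |x1 y1 1; x2 y2 1; x3 y3 1|,
# expanded along the first row, with the absolute value taken at the end.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(det) / 2  # area cannot be negative

print(triangle_area((-1, 2), (2, 4), (0, 0)))  # 4.0, matching the solution
```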